Scams - We all need to be aware

I actually had one of those (sort of) pulled on me, back when I lived in Florida. Mine involved being at a light, having someone step off the sidewalk, having the driver in front of me slam on the brakes, and then complain of back pain when I hit him at about 15 mph coming off the line.

The really crap part is that the police believed me and even noted it was something that had been happening in the area, but because we couldn't 100% prove it was a scam, my insurance still had to pay out and I took the hit.
I would recommend that everyone get a dashcam, especially one that captures your speed. You may never need it, but it's better to have it and not need it than to need it and not have it.
 
If you go that route, you need to remember that a dash cam can work against you, as well as for you.
 
If you're doing the right thing, it won't be going against you.


You've seen too many police videos to fall for that.



I'm not saying not to do it. I'm just saying that, if you choose to do it, understand that there is a potential downside.
 
If you're not speeding, running red lights, or failing to use turn signals, it will all be captured for your protection. Cops will invent reasons to stop you. Video doesn't lie. Some cops do. A dashcam comes in handy in case of an accident too.


I'm getting my daughter and her family Attorney Shield as part of their Christmas presents. It's an app that gives you 24/7 access to a lawyer. I hope they never have to use it, but I want them to have it just in case. Just another layer of protection for them.

 

I appreciate you pointing out the benefits of a dash cam for people, as that allows for an informed decision, and I'm not going to argue this with you. I know for certain that, just as with other forms of evidence, it can be used against you even if you've done nothing wrong (think of the famous "inconsistent statement" discussion in the videos warning you against speaking to the police), and I'll leave it at that. So we'll just have to agree to disagree here.
 
@everyone

Artificial intelligence: authentic scams.

AI tools are being maliciously used to send “hyper-personalized emails” so sophisticated that victims can’t tell they’re fraudulent.

According to the Financial Times, AI bots are compiling information about unsuspecting email users by analyzing their “social media activity to determine what topics they may be most likely to respond to.”

Scam emails that appear to have been composed by family and friends are subsequently sent to the users. Because of the personal nature of the email, the recipient is unable to identify that it is actually nefarious.

“This is getting worse and it’s getting very personal, and this is why we suspect AI is behind a lot of it,” Kristy Kelly, the chief information security officer at the insurance agency Beazley, told the outlet.

“We’re starting to see very targeted attacks that have scraped an immense amount of information about a person.”

Malicious actors use artificial intelligence to scrape online data about potential targets as well as pen convincing fraudulent emails. Kaspars Grinvalds – stock.adobe.com

“AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they’re from trusted sources,” security company McAfee recently warned. “These types of attacks are expected to grow in sophistication and frequency.”

While many savvy internet users now know the telltale signs of traditional email scams, it’s much harder to tell when these new personalized messages are fraudulent.

Gmail, Outlook, and Apple Mail do not yet have adequate “defenses in place to stop this,” Forbes reports.

“Social engineering,” ESET cybersecurity advisor Jake Moore told Forbes, “has an impressive hold over people due to human interaction but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online.”

Experts warn that the phishing emails are so advanced that they can slip past security measures and dupe users. Prostock-studio – stock.adobe.com

Bad actors are also able to utilize AI to write convincing phishing emails that mimic banks, accounts and more. According to data from the US Cybersecurity and Infrastructure Security Agency cited by the Financial Times, over 90% of successful breaches start with phishing messages.

These highly sophisticated scams can bypass security measures, and the inbox filters meant to screen emails for scams may be unable to identify them, Nadezda Demidova, a cybercrime security researcher at eBay, told the Financial Times.

“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” Demidova said.

Users have been urged to bolster online account security and to verify the legitimacy of links and their senders before clicking. PhotoGranary – stock.adobe.com

McAfee warned that 2025 would usher in a wave of advanced AI used to “craft increasingly sophisticated and personalized cyber scams,” according to a recent blog post.

Software company Check Point issued a similar prediction for the new year.

“In 2025, AI will drive both attacks and protections,” Dr. Dorit Dor, the company’s chief technology officer, said in a statement. “Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns.”

To protect themselves, users should never click on links within emails unless they can verify the legitimacy of the sender. Experts also recommend bolstering account security with two-factor authentication and strong passwords or passkeys.

“Ultimately,” Moore told Forbes, “whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information when requested — however believable the request may seem.”
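
For anyone here who is comfortable with a little scripting, the article's advice about verifying senders can be taken one step further: save a suspicious message as a raw .eml file and look at the authentication results the receiving server recorded (SPF/DKIM/DMARC), and at whether the From, Return-Path, and Reply-To addresses line up. The sketch below is only an illustration using Python's standard email module; the .eml file name and the exact header contents are assumptions and will vary by mail provider.

# Minimal sketch: inspect the authentication headers of a saved email.
# Assumes the suspicious message was exported as a raw .eml file; the
# path passed on the command line is only an example.
import sys
from email import policy
from email.parser import BytesParser

def check_auth_headers(path):
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    # The visible From line and the underlying envelope/reply addresses
    # should normally match; mismatches are a common phishing tell.
    print("From:       ", msg.get("From", "(missing)"))
    print("Return-Path:", msg.get("Return-Path", "(missing)"))
    print("Reply-To:   ", msg.get("Reply-To", "(none)"))

    # Receiving mail servers usually record SPF/DKIM/DMARC outcomes here.
    results = msg.get_all("Authentication-Results") or []
    if not results:
        print("No Authentication-Results header found - treat with caution.")
    for header in results:
        lowered = header.lower()
        for check in ("spf", "dkim", "dmarc"):
            if f"{check}=pass" in lowered:
                print(f"{check.upper()}: pass")
            elif f"{check}=" in lowered:
                print(f"{check.upper()}: not a clear pass - be suspicious")

if __name__ == "__main__":
    check_auth_headers(sys.argv[1])

Save it as something like check_auth_headers.py and run it with the path to the saved message. Even a clean pass only means the message really came through the domain it claims; it won't help if a friend's or coworker's account has been taken over, so the advice about double-checking requests out of band before sending money or information still applies.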
 