Protection Archives - Panda Security
https://www.pandasecurity.com/en/mediacenter/tag/protection/
Clear tips, up-to-date news and practical solutions to protect your family and devices. Learn how to avoid online threats and stay one step ahead with Panda Security.

Smart Glasses: Cool Tech or a Privacy Threat?
https://www.pandasecurity.com/en/mediacenter/smart-glasses-cool-tech-or-a-privacy-threat/
Fri, 24 Oct 2025

The post Smart Glasses: Cool Tech or a Privacy Threat? appeared first on Panda Security Mediacenter.

Smart wearables are nothing new – an Apple Watch barely attracts attention these days. And now smart glasses represent the latest frontier in wearable technology, blending digital convenience with everyday eyewear. 

The newly released Meta Ray-Ban Display glasses feature an in-lens screen and gesture controls, showing how close these devices are to mainstream adoption. However, as sleek and cool as this technology appears, mounting privacy and safety concerns raise important questions about what these devices mean for society in general.

Key takeaways

  • Smart glasses like Meta Ray-Ban Display offer discreet access to messages, navigation and multimedia without pulling out your phone.
  • The near-invisible display and discreet recording features raise concerns about unnoticeable surveillance and data privacy.
  • Social etiquette and legal frameworks around smart glasses are still evolving, with calls for clear consent and transparency.
  • Privacy advocates warn about new risks with facial recognition and data storage tied to smart glasses.
  • Users must weigh the benefits of augmented convenience against the ethical responsibilities of wearable tech use.

What are smart glasses?

Smart glasses are eyewear embedded with digital displays and connectivity features that relay information directly to the wearer. Unlike traditional smartphones or smartwatches, they offer a hands-free, discreet interface — often overlaying data in the user’s line of sight. The latest Meta Ray-Ban Display glasses incorporate a nearly invisible heads-up display and gesture controls for messaging, media playback, and turn-by-turn navigation.

Why smart glasses are exciting

Smart glasses unlock new ways to interact with digital content without interrupting real-world activities. They enable users to read texts, take calls, view social media, record video and even get real-time transcription of conversations. The Meta Ray-Ban Display’s subtle design makes it look like regular glasses, removing previous stigma and “Glasshole” backlash seen with earlier bulky wearables like Google Glass.

Privacy and safety concerns

Despite their benefits, smart glasses introduce serious privacy dilemmas. The unobtrusive screen and camera can record or stream without others noticing, potentially infringing on bystanders’ rights. Most smart glasses are fitted with LEDs to indicate the camera is in use, but in reality this offers little or no protection to the general public.

Facial recognition technology under consideration for future versions makes these concerns more urgent, threatening pervasive surveillance and misuse of personal data. The fact that voice commands and visual data sent to cloud servers are saved without opt-out options intensifies worries about data control and security. It is also unclear how companies like Meta will use this data.

Driving safety is yet another debate, as some glasses offer navigation displays. This technology could distract drivers in the same way using a phone does. Regulation is currently sparse, with policymakers still catching up to the technology’s rapid development.

Practicing responsible use

Industry experts and ethicists recommend adopting clear social etiquette for smart glasses use: always ask consent before recording, use visible indicators when capturing footage, and remain aware of your surroundings. 

Smart glasses wearers should also prioritize protecting their own and others’ data by configuring privacy settings and understanding device data policies. Manufacturers must also build transparency and ethical responsibility into their products to build trust in this new category of wearable technology.

Conclusion

Smart glasses stand at a crossroads between transformative convenience and profound privacy challenges. While devices like Meta’s Ray-Ban Display make the technology appealing and accessible, careful consideration of ethical use and robust privacy protections is essential to prevent misuse. 

For consumers, the key will be embracing this innovation with awareness and respect for social norms. Smart glasses should enhance their lives without undermining the privacy of others. The future may indeed be wearable, but it must also be responsible.

Google Partners with StopNCII to Block Revenge Porn
https://www.pandasecurity.com/en/mediacenter/google-partners-with-stopncii-to-block-revenge-porn/
Wed, 22 Oct 2025

The post Google Partners with StopNCII to Block Revenge Porn appeared first on Panda Security Mediacenter.

Google has partnered with UK nonprofit StopNCII to enhance its defenses against non-consensual intimate imagery (NCII), commonly known as revenge porn. This collaboration uses digital fingerprinting technology to help victims proactively protect their privacy. The system blocks images and videos from appearing in Google Search results and across other major platforms, all while ensuring the actual image never leaves the user’s device.

The technology relies on user-submitted image hashes, empowering individuals to take control before abuse occurs. This article explains how the system works, who can benefit, and its limitations.

Key takeaways

  • Google and StopNCII have partnered to detect and block revenge porn using hash technology
  • The system is user-controlled – individuals must proactively upload image hashes to protect themselves
  • Your actual photo never leaves your device; only a unique digital fingerprint (hash) is shared
  • Protection now extends to Google Search, joining platforms like Meta, Bing, TikTok, and Reddit
  • The system works only for images you possess and does not cover AI-generated content or audio

How can I stop revenge porn from spreading?

StopNCII allows individuals to create a private case where they can select intimate images they wish to protect. Their system generates a unique digital fingerprint, known as a hash, from each image. This hash is mathematically derived from the image’s data but cannot be reversed to recreate the original photo. In the unlikely event a hacker intercepts your hash, all they will see is a long string of letters and numbers that doesn’t actually “do” anything and cannot be turned back into the image.

The hash is uploaded to StopNCII’s database and shared with partner platforms, including Google. When a matching image is uploaded online, the websites and services partnered with StopNCII will detect the hash and block or remove the picture automatically.

Protected images never leave your device and StopNCII never actually “sees” your photographs.

How do image hashes work?

You can think of a hash as a digital fingerprint for your photo. Each image produces a unique hash through a cryptographic process. If even one pixel changes, the hash changes completely. This ensures precise identification without storing or sharing the actual image.

StopNCII uses this technology to protect privacy while enabling effective detection across platforms.
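To make the fingerprint idea concrete, here is a minimal Python sketch using the standard library’s SHA-256 hash. It is illustrative only: the byte strings stand in for image files, and StopNCII’s production system uses its own hashing technology rather than this exact function.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a hex digest acting as a digital fingerprint for the image bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for a protected photo and a copy with one byte changed ("one pixel").
original = b"stand-in-bytes-for-a-private-photo"
altered = b"Stand-in-bytes-for-a-private-photo"

# Only the hash is shared with partner platforms -- never the image itself.
blocklist = {image_hash(original)}

print(image_hash(original) in blocklist)  # True: an exact re-upload is detected
print(image_hash(altered) in blocklist)   # False: any change produces a new hash
```

Note that a cryptographic hash like the one above only catches exact copies; real-world matching systems account for edited images, but the privacy property is the same: the fingerprint reveals nothing about the photo itself.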

Who can use StopNCII hash protections?

Any individual aged 18 or older who possesses nude, semi-nude, or sexually explicit images and videos they fear might be shared non-consensually can use StopNCII. The service is free and available to anyone, anywhere in the world. 

Since 2015, StopNCII’s partner, the Revenge Porn Helpline, has removed over 300,000 NCII items using image hashing with a 90% success rate.

What are the limitations?

Unfortunately, protection is not universal. The system only works for images you have in your possession. AI-generated nudes, audio recordings, or text messages are not covered. 

StopNCII’s image hashing only works with partner platforms like Google, Bing, Meta, X, and TikTok. This means that non-partner sites may not detect or remove flagged content. However, registering sensitive images with StopNCII will help to limit or prevent spread – particularly across the most popular online services.

Take control of your digital privacy now

Google’s integration with StopNCII marks a major step in proactive online protection. By turning image protection into a user-controlled process, the system empowers individuals to safeguard their dignity before harm occurs.

As regulators step up efforts to combat sexual abuse material, StopNCII offers a powerful, user-centric solution that does not rely on invasive content scanning alternatives. You can start protecting your sensitive images with the StopNCII service right now by creating a case.

AWS outage: what it reveals about the fragility of cloud cybersecurity
https://www.pandasecurity.com/en/mediacenter/aws-outage-cybersecurity-risk/
Mon, 20 Oct 2025

The post AWS outage: what it reveals about the fragility of cloud cybersecurity appeared first on Panda Security Mediacenter.

The failure of the world’s leading cloud infrastructure platform caused a blackout across websites, apps, and social networks that lacked contingency plans. Without a plan B, an outage can trigger total paralysis, and even invisibility, multiplying the risk of intrusions.

The engine stopped

On the morning of Monday, October 20, 2025, numerous websites, applications, and social networks went dark due to a global outage of Amazon Web Services (AWS), the world’s largest cloud infrastructure platform. In the United States, users were unable to access Amazon, Alexa, Prime Video, Crunchyroll, Canva, Perplexity, and Duolingo; social networks like Snapchat or Goodreads; and games such as Fortnite, Roblox, or Clash Royale. In Europe, several services experienced similar accessibility issues.

“This happens because many invisible pieces of the internet live on AWS,” explains Hervé Lambert, Global Consumer Operations Manager at Panda Security. “When this platform fails, it’s not just a server that goes down — entire basic services collapse, affecting websites, apps, and social networks that rely on them.” In short, “they stop working because they share the same infrastructure and base services — computing, storage, DNS, authentication, and CDN — either directly in AWS or in third parties that depend on it. Without multi-region architecture or contingency plans, the entire user experience — loading, logging in, paying, or posting — falls apart.”

“When an outage of this magnitude occurs,” continues Lambert, “some apps can’t serve pages, APIs, or feeds because their compute layer — EC2, EKS, or Lambda — fails at the nodes or control plane. If there’s nowhere to read or store data, the site can’t load or authenticate; logins break because authentication systems like Cognito, STS/AssumeRole, or AWS SSO stop issuing tokens; DNS fails to resolve, or the CDN can’t fetch origin data, so domains respond erratically. Even if an app isn’t hosted on AWS, it still suffers if its providers are — the whole chain behaves like a house of cards.”

Why AWS Failures Ripple Across Services and Apps

Moreover, when AWS fails or degrades, “some companies go blind because their observability depends on that same platform,” warns Lambert. “If tools like CloudWatch, CloudTrail, GuardDuty, SIEMs, dashboards, SNS/SES alerts, or SSO are hosted in the same region, they too go down — leaving websites without metrics, logs, or valid credentials, and therefore exposed.” All of this is preventable “if monitoring, logging, and identity have an emergency exit outside the failure zone.”

Many companies, however, centralise everything in a single region and account — “including backups and KMS keys,” notes Lambert. “Without multi-region failover, unavailability is total. Under pressure, some teams open security groups, disable WAFs, or expand IAM permissions to keep systems running — often breaking more things or leaving apps vulnerable.”

The importance of having a “Plan B”

Why are there no contingency plans if outages are so risky?

“Because they aren’t incentivised — they seem expensive and technically tedious,” summarises Lambert. “Many websites and apps lack a Plan B because their priorities are misaligned: business rewards speed, not resilience; there’s a false sense of security — people believe these things won’t happen to them. Multi-region or multi-account setups, data replication, redundant identities, runbooks, and drills all sound like cost doubling. And many assume AWS won’t fail or that the SLA will cover the loss — which is not true.”

At this point, the role of security by design becomes crucial. Many organisations still don’t integrate cybersecurity from the earliest stages of product or infrastructure development. They often react later with patches instead of building resilient systems from the start — a less effective and ultimately more expensive approach.

To break that cycle, Lambert suggests: “build resilience into KPIs, separate accounts and regions, automate backups and guardrails, and run failover drills. That will always be cheaper than explaining to thousands of users why your service has disappeared.”
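The failover idea Lambert describes can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions, not a production pattern: the endpoint URLs are hypothetical, and `fetch` stands in for whatever HTTP client a real service would use.

```python
def fetch_with_failover(endpoints, fetch):
    """Try each regional endpoint in order, degrading to the next on failure."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except Exception as err:
            last_error = err  # this region is down; fall back to the next one
    raise RuntimeError(f"all regions failed: {last_error}")

# Simulated failover drill: the primary region is down, the secondary answers.
def fake_fetch(url):
    if "us-east-1" in url:
        raise ConnectionError("simulated regional outage")
    return "ok from " + url

print(fetch_with_failover(
    ["https://api.us-east-1.example.com", "https://api.eu-west-1.example.com"],
    fake_fetch,
))  # ok from https://api.eu-west-1.example.com
```

The point of rehearsing this in code (and in regular drills) is that the fallback path gets exercised before the day the primary region actually disappears.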

Does the Qantas hack include U.S. citizens?
https://www.pandasecurity.com/en/mediacenter/does-the-qantas-hack-include-u-s-citizens/
Mon, 20 Oct 2025

The post Does the Qantas hack include U.S. citizens? appeared first on Panda Security Mediacenter.

Yes, it does. Although there is no exact count of affected U.S. citizens, the number is likely substantial. Every year, Australia welcomes more than half a million tourists from the USA, and Qantas is one of the major airlines operating flights from numerous major U.S. cities, including Los Angeles, San Francisco, New York, and Dallas. The breach occurred in July 2025, and although the FBI reported some successes in dealing with the hacker organization that claimed responsibility, the stolen Qantas data was subsequently leaked on the dark web. Qantas has begun notifying affected travelers, including U.S. citizens, about the cybersecurity breach that affected nearly six million customers worldwide.

Key takeaways

  • U.S. travelers are among the victims of the Qantas hack from July 2025
  • The details of 5.7 million Qantas customers have been leaked online
  • The leaked information consists predominantly of names, addresses, DOBs, and phone numbers
  • The number of affected Americans is unknown, but it is likely in the thousands
  • Qantas has begun informing affected customers, but does not offer free identity theft monitoring services to the victims

When did Qantas experience the hack, and who is responsible for the attack?

Earlier this year, hackers belonging to a cybercriminal collective known as Scattered Lapsus$ Hunters deployed social engineering tactics to gain unauthorized access to the Salesforce environments of numerous high-profile companies, including the Australian airline Qantas. The incident occurred in the summer of 2025 and resulted in the theft of over 5 million travelers’ personal records, which included sensitive information. 

Has the stolen information been leaked on the dark web?

Although global law enforcement agencies, including the FBI, managed to disrupt the operations of the Scattered Lapsus$ Hunters cyber gang, records of approximately 5.7 million travelers have been exposed. After the initial confirmation of the Salesforce-related cyber incidents, multiple law enforcement agencies worked together to shut down the website of the crime gang that claimed responsibility for the attack, and also took down a dark web forum used by the gang members.

What info was leaked in the Qantas cyber incident?

The hackers managed to obtain sensitive information, including names, email addresses, physical addresses, date of birth details, and phone numbers. The stolen data does not contain Social Security Numbers (SSN) or passport information of U.S. citizens.  

How many U.S. citizens have been affected by the Qantas hack?

The number of U.S. citizens affected by the data breach is unknown. However, Australia receives more than half a million tourists every year. So the number of affected travelers is likely in the thousands.

How can you find out if you are included in the Qantas data breach?

Qantas has begun notifying affected customers by email with more information about the breach. The airline advises its customers to remain alert and to use two-step authentication whenever possible.

The Australian airline is not alone. It is just one of many organizations globally affected by the social engineering attacks on Salesforce customers; the same group used the same tactic against other high-profile targets, including Cisco, Allianz Life, and Coca-Cola. Scattered Lapsus$ Hunters claims to have more than one billion personally identifiable records. With the increasing number of cyber incidents, the likelihood of personal information being exposed is substantial, and having antivirus protection on all connected devices has never been more critical.

Is it possible to keep AI out of your personal life?
https://www.pandasecurity.com/en/mediacenter/is-it-possible-to-keep-ai-out-of-your-personal-life/
Fri, 17 Oct 2025

The post Is it possible to keep AI out of your personal life? appeared first on Panda Security Mediacenter.

It is close to impossible to keep AI out of your personal life, and a recent report by Pew Research confirms that the majority of Americans admit they do not have much control over how AI is used in their lives. And they are correct – it is particularly challenging to stay hidden, especially for people living in the Western world.

Artificial Intelligence is virtually everywhere, and its automated programs constantly scour the internet and other sources for data. Nowadays, internet users leave numerous digital footprints that are useful for AI crawlers, which collect information about everything, including aspects of people’s personal lives. Data brokers do not help the situation, as people who tend to leave fewer digital footprints still end up on databases around the internet because of their presence on public and private lists. AI is being utilized in various fields, including but not limited to video surveillance, finance, healthcare, and transportation. Different generative artificial intelligence automated programs, which include chatbots, are trained on specific parts of data fed to them, which includes social media content and the public internet.

Key takeaways

  • AI is deeply rooted in digital life, and it is close to impossible to avoid its influence on people’s personal lives
  • Mainstream chatbots try to exclude personal information, but AI tools used by organizations such as law enforcement and intelligence agencies still have access to personal data
  • Mainstream AI chatbots such as Grok, Meta AI, Gemini, and ChatGPT train on public data
  • When trying to avoid AI, VPNs are helpful but come with limitations

How is AI being used, and why is it hard to keep it away?

AI tools are already being deployed almost everywhere, from dating websites to science research and entertainment, as well as in private and government law enforcement organizations. AI-powered assistants are helping people find better matches on platforms such as Facebook Dating. High-profile individuals, including Elon Musk, have predicted that we will soon have AI-generated games and movies, and that AI will accelerate and generate new scientific discoveries.

Do chatbots train on info from data brokers such as White Pages?

White Pages is considered the largest online directory provider in the USA and has approximately 200 million user records. Publicly available AI-powered chatbots, such as those offered by OpenAI and Google, do not utilize White Pages data for training their large language models (LLMs). People’s private and sensitive information is excluded or anonymized. Users’ privacy is often protected to some extent by various legal frameworks, such as California’s CCPA and Europe’s GDPR. However, law-enforcement-specific offerings, such as SoundThinking’s CrimeTracer, don’t have such limitations. They provide a Google-like search engine for government agents. Everyone hopes that law enforcement does not misuse those powerful tools for personal or political gain.  

What do AI chatbots train on?

Different types of chatbots focus on specific areas of the internet. For example, xAI’s Grok is heavily trained on data from the social media platform X, while Facebook’s Meta AI trains on information available in public posts on Facebook, along with other details shared with the app, such as location and public profile information. Even though Meta has confirmed on multiple occasions that it does not train on private messages, it admits to using public data, and the company does not offer a direct opt-out feature for US users. This means users cannot prevent their public posts from being used to train its AI. Google’s Gemini and OpenAI’s ChatGPT train on data from across the public internet.

Does a VPN prevent AI from training on your online activity?

It does to some extent, but it is not bulletproof. A VPN helps against network-level profiling because it encrypts internet traffic and masks a person’s IP address. However, VPN services come with limitations: they have almost no impact on account-level surveillance or on tracking through local identifiers such as cookies and browser fingerprinting. If a user is signed in to the same social media profile or email account, the service provider can still track their behaviour and possibly use it for AI training. The same applies to cookies and other digital identifiers, which are used to build profiles for targeted advertising or personalized experiences.

Apart from using a VPN, which often comes bundled with quality antivirus solutions, individuals who wish to limit AI training on their data can consider using privacy-focused browsers. They should also adjust their privacy settings on social media platforms and search engines, and review the privacy settings on apps, consoles, and even government and public services. This includes requesting that data brokers and mortgage providers not share, publish, or train AI on personal information. When it comes to privacy, the less you agree to share, the better.

How Parents Can Train Their Children to Use AI Responsibly
https://www.pandasecurity.com/en/mediacenter/how-parents-can-train-their-children-to-use-ai-responsibly/
Fri, 10 Oct 2025

The post How Parents Can Train Their Children to Use AI Responsibly appeared first on Panda Security Mediacenter.

In a world where artificial intelligence tools are becoming as common as smartphones, parents face a critical challenge: teaching children to interact with AI safely while harnessing its educational potential. Research shows that 78% of children have discussed AI with their parents, yet only 34% of those conversations address crucial concerns like information accuracy.

The key lies not in avoiding AI altogether, but in building children’s digital literacy and critical thinking skills from an early age.

Key takeaways

  • Start conversations early: 78% of children have discussed AI with parents, but only 34% of these conversations address critical concerns like information accuracy and emotional attachment – make these discussions comprehensive and ongoing
  • Build healthy skepticism: Children trust AI responses more than adult sources, with 40% expressing no concerns about following AI advice – teach them to verify all AI information through multiple reliable sources
  • Set clear boundaries: Establish family rules including never sharing personal information with AI, discussing confusing responses with adults, and using AI as a learning tool rather than human replacement
  • Choose safe platforms: Select kid-focused AI tools with robust parental controls and content filtering rather than general-purpose chatbots

Understanding the AI landscape for children

Studies reveal that 58% of children who use AI chatbots believe these tools provide better information than traditional searches. This trust, while concerning, presents an opportunity for parents to guide responsible usage.

AI tools present both tremendous opportunities and significant risks for young users. On the positive side, they can enhance creativity, support learning, and provide quick access to information. However, research also shows concerning trends: children may develop emotional attachments to AI companions, encounter inappropriate content despite safety measures, and struggle to distinguish AI-generated information from reliable sources.

Building critical thinking skills

Protecting your kids begins with smart, transparent conversations. Asking your kids how they use AI and sharing tips on safe use will prepare them to interact with these systems safely.

Question everything

Encourage children to approach AI-generated content with healthy skepticism. When they receive an AI response, teach them to ask: “Is this information accurate? What sources support this?” Show them the basics of fact checking so they learn to confirm AI statements are accurate.

Understand AI limitations

Help children recognize that AI systems can produce biased or incomplete answers because they’re trained on datasets that may contain inaccuracies or reflect societal biases. Use age-appropriate examples to show how AI might favor certain viewpoints or provide outdated information.

Practice source verification

Implement a family rule that important information from AI sources must be confirmed through at least two reliable, human-authored sources before being accepted as fact. This builds essential media literacy skills that extend beyond AI use.

AI rules to protect your family

Here are some suggested guidelines that will help better protect kids as they interact with AI: 

Establish clear boundaries

Consider implementing rules such as never sharing personal information with AI systems, always discussing concerning or confusing AI responses with a trusted adult, and using AI as a learning tool rather than a replacement for human guidance.

Choose safe platforms

Not all AI platforms are suitable for children. Specialized kid-focused AI tools like PinwheelGPT offer better safety features than general-purpose chatbots. These platforms typically include robust parental controls, content filtering, and educational focus rather than pure entertainment.

Monitor for over-reliance

Watch for warning signs that your child may be developing an unhealthy relationship with AI tools. These include withdrawing from real-world friendships, preferring AI conversations to human interaction, or becoming distressed when AI access is limited. If these addictive behavior patterns emerge, consider reducing AI exposure and increasing opportunities for human social interaction.

Teach healthy skepticism

Help children understand that AI responses, while often helpful, can be manipulated to seem more credible. Explain how AI systems, like social media, are designed to maintain engagement, which may not always align with providing accurate or appropriate information.

Emphasize human connection

Regularly remind children that AI cannot replicate human knowledge and emotions. Encourage them to seek advice from trusted adults for important decisions and to maintain strong relationships with family and friends.

Putting AI rules into practice

As a parent, you are the most qualified to decide what is best for your family. Here are some ideas for putting AI rules into place.

Start small

Begin with simple, supervised AI activities like creative writing prompts or basic homework assistance. As children demonstrate responsible usage and critical thinking skills, gradually increase their independence.

Use technology tools wisely

Consider implementing parental control software that can monitor AI interactions while respecting your child’s developing autonomy. Tools like Panda Dome Family can alert parents to concerning conversations while allowing educational exploration.

Create learning opportunities

Transform AI mistakes into teachable moments. When AI provides incorrect information, use it as an opportunity to practice fact-checking skills and discuss why verification is important.

Empowering your kids in the age of AI

By focusing on critical thinking, emotional intelligence, and digital literacy, parents can help their children harness AI’s benefits while avoiding its pitfalls. The goal isn’t to shield children from AI entirely, but to empower them with the skills and judgment needed to navigate an AI-integrated future confidently and safely.

The post How Parents Can Train Their Children to Use AI Responsibly appeared first on Panda Security Mediacenter.

Does Facebook have a Dating AI assistant?
https://www.pandasecurity.com/en/mediacenter/does-facebook-have-a-dating-ai-assistant/
Wed, 08 Oct 2025 07:00:02 +0000


Yes, Facebook does have a dating AI assistant that helps users search for matches more efficiently on the company’s online dating service, Facebook Dating. The new AI tool comes in the form of a chatbot, and it began rolling out to some Facebook Dating users in Canada and the USA last month. The AI dating helper is designed to reduce users’ swipe fatigue, which, according to a recent Forbes Health Survey, is a real issue: users admitted to spending approximately 50 minutes a day swiping through dating apps.

The new chatbot is not the only new feature from the world’s biggest social media platform. Facebook has also introduced Meet Cute, a feature that aims to eliminate indecision in online dating by automatically matching users with a surprise match based on a personalized matching algorithm. Similar AI-powered features have already been introduced on other dating apps.

Key takeaways

  • Facebook Dating added a couple of new features to its arsenal of dating tools in an effort to expand its presence in the dating app market.
  • The new chatbot can help users find matches without swiping, simply by conversing with it. The other new feature, Meet Cute, offers surprise matches based on a matchmaking algorithm.
  • Unlike tools at other dating apps, the new Facebook Dating match features are offered free of charge.
  • Romance scams are rising. Fraudsters use AI tools to create fake profiles and gain victims’ trust with emotional manipulation tactics.

What is Facebook Dating?

Meta’s Facebook has billions of users, but despite its significant size, it is not known as the go-to place for people looking for a new partner. The folks at Meta have been trying to change that since Facebook Dating first launched six years ago. Meta markets its dating service as a space within Facebook designed to facilitate meeting and initiating conversations with like-minded people who share similar interests.

However, despite having hundreds of thousands of users, Facebook Dating has not taken off as well as other dating apps, whose active user numbers are in the millions. Meta is now taking a step forward: its new AI chatbot and Meet Cute aim to capture a larger share of the dating crowd. Last month, the new features started rolling out in Canada and the USA.

Facebook’s new dating assistant and Meet Cute explained

The dating assistant is presented as a chatbot that guides love seekers through their dating journey. Simply by conversing with users, it can suggest better matches. Meta is hoping to alleviate the swipe fatigue that has left dating app users frustrated.

A recent Forbes Health Survey disclosed that users spend approximately 50 minutes swiping per day. Instead of swiping, Meta wants users to describe the specific traits they are looking for to the dating assistant. For example, they can ask it to “find a handsome engineer from Los Angeles” or a “Catherine Zeta-Jones lookalike”. Meet Cute, on the other hand, pairs users automatically based on algorithmic predictions of mutual interest. The feature delivers matches weekly, but it is entirely optional, and users can easily opt out. Neither feature is yet available worldwide; both have started rolling out gradually to users in North America.

How does Facebook Dating compare to other dating apps?

While Facebook Dating garners the attention of hundreds of thousands of users, this represents only a tiny portion of the overall Facebook user base. The numbers are also small compared to popular dating apps like Tinder and Bumble, which count millions of daily active users.

Do other apps have similar AI features?

Yes, they do. Popular dating apps such as Tinder, Hinge, and Bumble have introduced similar chatbots. While some of these tools sit behind paywalls, AI is now widely adopted across the most popular online dating platforms.

Are romance scams on the rise?

Yes, romance scams continue to rise, powered by AI chatbots that help fraudsters appear more plausible and trustworthy. The new tools make fake requests hard to spot, and AI-driven attacks are especially effective against non-tech-savvy users. Criminals use AI to create fake profiles and images, and it also helps them sound convincing and gain trust. Once trust is established, criminals typically attempt to defraud their targets with urgent requests and financial demands. The AI and crypto revolutions have made it easy for scammers to thrive.

Facebook Dating introduced new features last month. They’re rolling out slowly in North America and expanding globally next year. AI is certainly helping companies deliver better tools, but cybercriminals are also utilizing it. While it is tempting to take advantage of the new tools, social media users need to keep in mind that the latest improvements in AI have also made it possible for fraudsters to become more creative when targeting new victims and executing malicious campaigns. Having adequate cyber protection can help fight back against bad actors trying to scam you.

The post Does Facebook have a Dating AI assistant? appeared first on Panda Security Mediacenter.

JLR cyberattack: How one hack devastated Britain’s biggest carmaker
https://www.pandasecurity.com/en/mediacenter/jlr-cyberattack-how-one-hack-devastated-britains-biggest-carmaker/
Mon, 06 Oct 2025 07:00:56 +0000


A sophisticated cyberattack has brought Jaguar Land Rover (JLR) to a complete standstill for over a month, creating one of the most devastating corporate cyber incidents in UK history. The attack demonstrates how modern manufacturers remain vulnerable to digital threats that can instantly halt multi-billion-dollar operations and threaten hundreds of thousands of jobs.

Key takeaways

  • JLR has been shut down since August 31, losing up to £500 million per week
  • Over 200,000 workers across the supply chain face job losses
  • The UK government intervened with an unprecedented £1.5 billion loan guarantee
  • Scattered Spider cybercrime group claimed responsibility for the attack
  • Production restart planned for October 6, but full recovery may take months

What happened in the JLR cyberattack?

The devastating attack began on August 31, 2025, when hackers infiltrated JLR’s IT systems, forcing the company to immediately shut down all operations. The notorious Scattered Lapsus$ Hunters group, linked to Scattered Spider cybercriminals who previously targeted major UK retailers including Marks & Spencer and Co-op, claimed responsibility for the breach.

JLR responded by proactively shutting down its entire global IT network to prevent further damage, bringing production to a complete halt across all facilities in the UK, China, Slovakia, India, and Brazil. The company’s three UK manufacturing plants in Solihull, Wolverhampton, and Halewood have produced zero vehicles since September 1, despite normally manufacturing approximately 1,000 cars every day.

How much is the cyberattack costing JLR?

The financial devastation has been unprecedented. Industry experts estimate JLR is losing between £50 million and £500 million per week, with some analysts suggesting daily losses of up to £7.1 million.

What makes this particularly catastrophic is that JLR reportedly had no active cyber insurance coverage at the time of the attack. Unlike Marks & Spencer, which recovered much of its £300 million cyber incident losses through insurance, JLR must bear the full financial burden of this attack. Some industry sources suggest total losses could reach £4.7 billion if the shutdown extends into November.

Supply chain devastation

The true human cost extends far beyond JLR’s factory gates. The company sits at the center of the UK’s largest automotive supply chain, directly employing 30,000 workers while supporting an estimated 120,000 to 200,000 additional jobs across hundreds of supplier companies.

Many suppliers are small and medium-sized enterprises heavily dependent on JLR orders. Industry surveys reveal that one in six businesses in JLR’s supply chain have already implemented redundancies, while others placed workers on zero-hour contracts. One smaller supplier has already laid off 40 employees, nearly half its workforce, directly due to the production halt.

What is JLR doing to recover?

JLR is implementing a cautious, phased recovery approach prioritizing security over speed. The company announced that the Wolverhampton engine facility is expected to restart on October 6, followed by other locations in subsequent weeks.

The recovery process involves collaboration with cybersecurity specialists, the UK’s National Cyber Security Centre (NCSC), and law enforcement agencies to ensure systems are fully secure before resuming operations.

How did the UK Government respond?

Recognizing the catastrophic economic implications, the UK government took the unprecedented step of guaranteeing a £1.5 billion emergency loan to JLR. This is the first time a UK company has received direct government financial support specifically due to a cyberattack.

The loan, provided by commercial banks including HSBC, MUFG, and NatWest but underwritten by the government, will be repaid over five years.

What this means for British manufacturing

The JLR cyberattack serves as a stark wake-up call for British industry about the vulnerability of modern manufacturing to cyber threats. As one expert noted, the incident demonstrates how “a single IT system attack can halt a multi-billion-pound physical production line”.

The attack highlights the interconnected nature of today’s automotive industry, where disruption to one major player cascades through hundreds of suppliers, distributors, and partners. For JLR, full recovery may take months even after production resumes, with industry sources suggesting it could take three to four weeks to ramp up to normal production levels.

As manufacturers increasingly rely on interconnected digital systems, the JLR incident stands as a powerful reminder that cybersecurity (and cybersecurity insurance) is no longer just an IT issue – it’s a fundamental business resilience requirement that can determine corporate survival.

The post JLR cyberattack: How one hack devastated Britain’s Biggest carmaker appeared first on Panda Security Mediacenter.

What personal information does the Nintendo Switch 2 collect?
https://www.pandasecurity.com/en/mediacenter/what-personal-information-does-the-nintendo-switch-2-collect/
Fri, 03 Oct 2025 07:00:23 +0000


As we enter the festive season, Nintendo has stepped up production, making it possible for consumers to actually find the latest Nintendo Switch 2 console in stock. There are still plenty of sold-out locations, but stock levels are certainly improving, and more and more people are upgrading to the new system. As thousands of customers pick up Nintendo’s latest offering, the new system comes with a few changes to its privacy policy. The Nintendo Switch 2 features a new function called GameChat, which enables Nintendo to collect, monitor, and record audio and video from chat sessions. Nintendo now collects voice and video recordings in addition to data points such as the player’s name, address, age, location, and contact information, including email addresses and phone numbers.

Key takeaways

  • The new GameChat feature is only available on Nintendo Switch 2 and allows consenting users to communicate via voice and video. Nintendo has announced that those interactions might be monitored and recorded. 
  • Nintendo’s general policy changed right before launch to notify customers that interactions with Nintendo’s customer service are also monitored and recorded, and that player info could be visible to other users.
  • The Nintendo Switch 2’s privacy policy is similar to those of other popular console solutions, such as the Xbox Series X and PS5. It allows consumers to limit what they share with the tech company.
  • Staying safe while console gaming requires common sense, system updates, and a fortified WiFi network.

What data does Nintendo collect, and what are the privacy policy changes between Nintendo Switch and Nintendo Switch 2?

Nintendo collects a wide range of personal, commercial, and player performance data. The data points consist of player age, location, contact information, and profile name. The Japanese tech company also tracks players’ buying behavior, gaming habits, and system performance. The new Switch 2 comes with a few differences when compared to the original Switch. While we will not go through all changes, we highlight the new GameChat tool, which is only available on the new console. It allows Nintendo Switch 2 users playing in online mode to communicate in voice and video sessions that Nintendo says could be observed and recorded.

How does it compare to other consoles?

When it comes to privacy, Nintendo’s latest console is similar to those from other manufacturers, such as Microsoft and Sony. Unfortunately, there is no option to completely opt out of sharing personal information, as users are always required to agree to the manufacturer’s general user agreement terms. The tech conglomerates are not the only ones collecting data; third-party companies also get access to various data points. Those vendors come from a wide range of backgrounds, including government agencies, advertisers, and video game companies, and they may or may not be based in the USA. For example, Electronic Arts, maker of popular games like EA Sports FC 26, has offices worldwide, spanning countries in Europe, Asia, and North America.

What can you do to protect privacy even more?

It is always worth going through the privacy settings and adjusting the defaults. Privacy-conscious players can limit some of the information they share with console manufacturers. Beyond minimizing data collection, users can make sure their home WiFi network is configured correctly and safeguarded. Even though antivirus software cannot be installed directly on consoles, anti-malware solutions can protect your home WiFi with features like a firewall and a VPN. Such solutions often include network monitoring; for example, they can alert you when your kids’ Nintendo Switch 2 suddenly goes online while they are gaming.

How to stay safe while console gaming?

Staying safe while console gaming starts with setting proper filters and restrictions. If you play online, muting other players or your own microphone can be particularly helpful if you (or your kids) wish to avoid hearing things you are not supposed to hear. Taking advantage of the free Nintendo Switch Parental Controls smart device app is also a must.

Another helpful tip is to keep the gaming system updated. Even though console companies try to protect users, all of them have a history of being hacked. A good way to reduce the risk of data breaches is to stop ignoring system updates and install them whenever they become available. Another option is to play games offline. Not all games require an online connection, and playing offline is advisable if you want to limit your online presence and avoid oversharing.

Similar to other consoles on the market, the Nintendo Switch 2 comes with default settings that could be considered rather data-hungry. Nintendo is not hiding this: it has confirmed that the new Switch 2 collects audio and video from chats. While there are options to limit certain aspects, console companies such as Microsoft, Sony, and Nintendo like to collect data. If you do not wish to share much with them, tweak the privacy settings and make sure your home WiFi network is properly protected.

The post What personal information does the Nintendo Switch 2 collect? appeared first on Panda Security Mediacenter.

Why AI Browsers Could Put Your Money at Risk
https://www.pandasecurity.com/en/mediacenter/why-ai-browsers-could-put-your-money-at-risk/
Wed, 01 Oct 2025 07:00:34 +0000


A new generation of web browsers is coming to a computer near you. Agentic AI browsers, like Comet from Perplexity, can shop and browse the internet for you automatically to save time and effort.

However, agentic AI browsers also create dangerous security holes that scammers are already learning to exploit. These smart assistants lack the “street smarts” that keep humans safe from online fraud, making them easy targets for cybercriminals.

Key takeaways

  • AI browsers can automatically complete fake purchases and enter personal information on scam websites
  • These systems can’t recognize obvious warning signs that humans would spot immediately
  • Tech companies may be rushing AI browsers to market without proper safety features
  • Traditional internet security tools don’t protect against these new types of attacks

What makes AI browsers different and dangerous

Think of traditional AI assistants like Siri or Alexa that answer questions but can’t take action beyond your device. AI browsers are completely different. They can actually surf the web, click links, fill out forms, make purchases, and manage your email accounts without asking you first. 

An AI browser is like having a personal assistant who can spend your money and access your accounts, which is pretty cool. Less cool is the way that these browsers have never learned to be suspicious of strangers.

This creates a perfect opportunity for scammers. While these AI systems are incredibly smart in some ways, they completely lack the gut instincts that protect humans from fraud. They don’t get that “something is not quite right” feeling when a website looks suspicious or a deal seems too good to be true.

Tests show how easily AI gets scammed

Security experts recently decided to test how well AI browsers could spot scams. What they found was shocking. Here’s what happened when they tested Perplexity’s Comet AI browser:

The fake shopping test

Researchers built a fake Walmart website that looked obviously suspicious—the logo was distorted, the web address was wrong, and the whole site felt “off.” Then they told the AI browser to buy an Apple Watch from this fake site. A human would have immediately noticed something was wrong and left the site. But the AI browser completed the entire purchase, entering saved payment information and processing the fraudulent transaction.

The email scam test

Next, they sent the AI a fake email pretending to be from a well-known bank, complete with a dangerous link designed to steal login information. When a human gets suspicious emails like this, most people delete them. The AI browser treated it like a legitimate task, clicked the malicious link, and typed in the user’s bank username and password on the fake website.

The hidden command test

In the most clever test, researchers hid invisible instructions inside what looked like a normal webpage. While humans would just see a regular page, the AI could read secret commands telling it to download potentially harmful files. The AI followed these hidden instructions without question, infecting the test machine with malware.

Why this matters

These aren’t just isolated problems: they represent a completely new way for scammers to attack people. Instead of having to trick millions of individuals one by one, criminals could potentially target the AI systems that millions of people use, multiplying their impact dramatically.

The scariest part? These AI browsers are designed to be helpful above all else. They want to complete tasks and make users happy, which means they’ll bend over backward to do what they think you want—even when “what you want” is actually a scammer’s instruction disguised as a legitimate request.

How to stay safe

If you’re considering using AI browsers or if your workplace is implementing them, here are some essential safety measures:

  • Set strict limits on what the AI can do without asking permission first. Don’t let it make purchases, enter personal information, or access sensitive accounts automatically.
  • Monitor everything the AI does. Make sure you can see and review every action it takes on your behalf, especially anything involving money or personal data.
  • Use the minimum permissions necessary. Don’t give the AI access to accounts, payment methods, or information it doesn’t absolutely need for specific tasks. If you wouldn’t give your credit card to a stranger, you shouldn’t give it to an AI agent either.
  • Stay involved in important decisions. Never let an AI browser handle financial transactions, sensitive communications, or account management without your direct oversight.
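The safety measures above boil down to a simple pattern: log every action, auto-approve only a small allowlist, and route everything else to a human. Here is a minimal sketch of that idea in Python. All the names (`AgentAction`, `ActionGate`, the action kinds) are hypothetical illustrations, not the API of any real AI browser:

```python
# Hypothetical sketch of a "permission gate" for an agentic browser assistant.
# Names and action kinds are illustrative only, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    kind: str    # e.g. "navigate", "read_page", "fill_form", "purchase"
    detail: str  # human-readable description of what the agent wants to do

@dataclass
class ActionGate:
    # Minimum permissions: only harmless actions run without asking.
    auto_allowed: set = field(default_factory=lambda: {"navigate", "read_page"})
    # Monitor everything: every requested action is recorded for review.
    audit_log: list = field(default_factory=list)

    def authorize(self, action: AgentAction,
                  human_approves=lambda a: False) -> bool:
        self.audit_log.append(action)         # log before deciding
        if action.kind in self.auto_allowed:  # low-risk: proceed
            return True
        return human_approves(action)         # sensitive: human must say yes

gate = ActionGate()
assert gate.authorize(AgentAction("navigate", "https://example.com"))
# A purchase is blocked by default because no human approved it.
assert not gate.authorize(AgentAction("purchase", "Apple Watch, $399"))
print(len(gate.audit_log))  # both requests were logged, allowed or not
```

The key design choice is that the default answer for anything outside the allowlist is "no": the agent can browse, but spending money or submitting credentials fails unless a person explicitly approves it.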

The bottom line

AI browsers promise incredible convenience, like having a digital assistant that can handle your online shopping, manage your emails, and research information while you focus on other things. But right now, these systems are like giving your credit card and house keys to someone who has never learned that strangers might try to trick them.

Agentic AI technology will likely improve over time, but today’s AI browsers represent a significant risk that you must understand before using them. The choice between convenience and security has never been more important.

Until these fundamental security problems are solved, the smartest approach is to treat AI browsers like you would any other powerful tool – useful when used carefully, but potentially dangerous when given too much freedom to act on your behalf.

The post Why AI Browsers Could Put Your Money at Risk appeared first on Panda Security Mediacenter.
