Social Engineering: How to Stop Cybercriminals from Tricking Your Team

Social engineering is the act of manipulating people into revealing confidential information, approving fraudulent requests, or granting system access — without breaching any technology. In 2026, AI has made social engineering attacks faster, harder to detect, and far more convincing than anything NZ businesses were trained to recognise.


A hacker does not need to break into your systems if they can talk their way in. And in 2026, they do not even need to talk — they can clone a voice from three seconds of audio and make a call that sounds exactly like your CEO.

Social engineering has always exploited the human element of security. What has changed is the sophistication of the tools available. Deepfake-as-a-Service platforms now allow attackers with no technical expertise to deploy hyper-realistic voice and video impersonation attacks. Human detection accuracy for high-quality deepfakes has dropped to as low as 24.5% in research settings. Your team cannot be expected to identify these attacks by ear or eye alone.

This guide covers what these attacks look like in 2026, why AI has fundamentally changed the threat, and the defences that work. For context on how this threat connects to broader incidents, our guide on how cyber attacks unfold explains how attackers move from initial access to full system compromise.

What Is Social Engineering and Why Does It Work?

Social engineering works because it bypasses technology entirely. No firewall blocks a convincing phone call. No antivirus catches a message that contains no malware. Social engineering attacks succeed because they exploit the way human beings make decisions — through trust, authority, urgency, and familiarity.

These psychological triggers have not changed. What AI has done is allow attackers to replicate the specific voice, face, and communication style of a known person to activate those triggers with precision that was previously impossible.

The psychological triggers attackers exploit

  • Authority — a request appears to come from a manager, executive, or IT administrator
  • Urgency — the request must be acted on immediately, leaving no time to verify
  • Familiarity — the attacker sounds or looks like someone the target already knows and trusts
  • Fear — non-compliance is presented as having serious consequences
  • Reciprocity — the attacker does something helpful before making the key request


How AI Has Transformed Social Engineering Attacks in 2026

Social engineering attacks have always been dangerous. In 2026 they have become industrialised.

The 2026 global research figures are stark: vishing attacks using deepfake voices rose 170% in a single quarter of 2025. AI impersonation scams grew 148% across calls, video, and messaging. Deepfake fraud attempts have increased 2,137% over the last three years. And just three seconds of audio is enough to clone a voice with 85% accuracy.


Deepfake-as-a-Service platforms became widely available in 2025, making voice and video cloning accessible to criminals with no technical background. An attacker can now purchase a ready-to-use attack kit, scrape audio from a LinkedIn video or public presentation, and generate a convincing voice clone of your CEO in minutes.

AI voice cloning and vishing

Attackers collect audio from publicly available sources — LinkedIn videos, webinar recordings, company event footage — and generate a voice clone used to make calls impersonating executives, IT staff, or suppliers. Finance teams are the primary target. The call instructs them to approve a payment, change bank account details, or provide credentials before end of day. Of the people targeted by a voice-cloning scam, 77% reported losing money.

Deepfake video attacks

Video deepfakes now maintain temporal consistency in live calls. Attackers join video meetings as a deepfaked version of an executive and request financial authorisation or credential access in real time. In a widely reported case, a finance employee authorised a transfer of over US$25 million after a video call in which every other participant was a deepfake. This attack vector is operational, not theoretical.

AI-personalised phishing and pretexting

AI analyses your company’s public communication and generates messages that precisely match your organisation’s tone, reference real colleagues by name, and cite real internal projects. These messages are functionally indistinguishable from legitimate internal communication. Our phishing scams guide covers this attack vector in detail.


Common Social Engineering Attack Types in 2026

The following covers the attacks most commonly targeting NZ businesses right now, and what each looks like in 2026.


  • Phishing: AI-generated emails that precisely match internal communication style. Social engineering click-through rates now exceed 54% for AI-crafted messages.
  • Vishing: voice calls using AI-cloned audio of known executives or suppliers. Requires only three seconds of audio. The primary vector for fraudulent payment authorisation.
  • Smishing: SMS phishing combined with deepfake voice follow-up calls. Growing rapidly as email filtering improves.
  • Deepfake video: live or recorded video impersonating executives. Used to authorise transfers or deliver false instructions from apparent leadership.
  • Pretexting: a fabricated scenario — fake IT audit, supplier verification, security check — used to extract information across multiple interactions.
  • Baiting: offering something appealing to lure recipients into clicking links or providing credentials. A common tactic deployed at volume.
  • Quid pro quo: the attacker offers IT assistance in exchange for credentials. Common via phone: ‘I am from IT, I can fix that if you give me your login.’
  • Tailgating: a physical attack — following an authorised person through a secure door. Relevant for businesses with server rooms or secure facilities.


Warning Signs of a Social Engineering Attack

While AI has removed many traditional tells, some indicators remain reliable. Train your team to pause and verify when any of the following appear.

  • Unexpected urgency — any request that must be acted on immediately without time to verify
  • Requests for payment, credentials, or access changes delivered by phone or video call alone
  • A known contact requesting something outside their normal behaviour or responsibilities
  • Pressure to bypass normal approval processes or keep the request confidential
  • Any payment instruction received by phone or video, even from a familiar voice or face
  • Requests arriving outside business hours or from unfamiliar numbers claiming to be known contacts


The single most important habit to build is this: always verify through a second, independent channel before acting on any request involving payment, credentials, or access. Not a reply to the same message. A call to a known number already on file.

How to Protect Your NZ Business Against Social Engineering

Technical controls alone are not sufficient. Protection requires verification processes, updated training, and layered technology working together.

Establish a verification protocol for high-risk requests

Any high-risk request — payment authorisation, bank account changes, credential sharing, or access grants — must require confirmation through an independent channel regardless of how convincing the original request appears. Document the protocol, communicate it to all staff, and make clear that following it will never result in disciplinary action even when it inconveniences senior staff.

Update security awareness training for AI social engineering

Training that teaches staff to spot bad grammar no longer addresses the actual threat. Training in 2026 must cover AI voice cloning, deepfake video attacks, and pretexting by familiar-seeming contacts. Our employee security awareness guide outlines what effective training looks like and why frequency matters as much as content.

Run social engineering simulations

The most effective way to build genuine resilience is to test your team against realistic attack scenarios before real attackers do. Simulations should include vishing calls, pretexting scenarios, and phishing emails — not just the obvious attempts but the convincing ones.

Limit public exposure of executive audio and video

Attackers harvest voice samples from LinkedIn videos, webinar recordings, and public presentations. Being deliberate about what audio and video is publicly available for key staff reduces the quality of voice clones that can be generated.

Apply zero trust and insider awareness principles

Zero trust assumes no request should be trusted on a single factor alone. Every high-risk action requires independent verification. Our insider threats guide covers how to build security awareness without creating a culture of suspicion.


Is Your Team Prepared for 2026 Social Engineering Attacks?

Exodesk delivers security awareness training, social engineering simulations, and IT security assessments to South Island businesses from our offices in Christchurch and Dunedin. If your team’s training was built for 2022 rather than the AI-powered attacks of 2026, it is not providing the protection you think it is.

We offer a no-obligation review of your current security awareness programme and verification protocols to identify gaps before an attacker finds them first.

Contact us today to discuss how we can help your business, or connect with us on LinkedIn for more insights.

Frequently Asked Questions About Social Engineering

What is a social engineering attack?

A social engineering attack manipulates people rather than systems — tricking staff into revealing credentials, approving fraudulent payments, or granting access through deception rather than technical exploitation. These attacks exploit psychological triggers including trust, authority, urgency, and familiarity, and are consistently the most common entry point for serious cyber incidents affecting NZ businesses.

How has AI changed social engineering attacks in 2026?

AI has transformed social engineering from a manual, skill-dependent activity into an industrialised operation. Deepfake-as-a-Service platforms allow attackers to create voice clones from three seconds of audio, generate real-time video impersonations, and produce personalised phishing messages that precisely match internal communication styles. Vishing attacks using deepfake voices rose 170% in a single quarter of 2025, and human detection accuracy for high-quality deepfakes has dropped to as low as 24.5%.

What is a deepfake social engineering attack?

A deepfake social engineering attack uses AI-generated audio or video to impersonate a known person — typically an executive, IT staff member, or trusted supplier. The attacker clones the target’s voice or face using publicly available recordings, then makes calls or joins video meetings to request fraudulent payments, credential access, or data transfers. These attacks are significantly harder to detect because the voice and face match someone the recipient knows and trusts.

What is vishing and how does it relate to social engineering?

Vishing is voice-based social engineering conducted by phone. In 2026, vishing attacks frequently use AI-cloned voices to impersonate executives, IT personnel, or suppliers. The caller creates a plausible scenario requiring urgent action. Seventy percent of organisations have experienced at least one vishing attack. The primary defence is a callback verification protocol requiring confirmation through a known, pre-existing phone number — not the number that called you.

How can my team identify a social engineering attack in 2026?

In 2026, staff should not rely on detecting attacks by spotting poor grammar or suspicious links. The reliable indicators are behavioural: any request creating urgency, bypassing normal approval processes, requesting payment or credentials by phone or video alone, or asking for secrecy should trigger verification. The correct response to any such attempt is always a second, independent confirmation through a channel already on file.

What is pretexting in social engineering?

Pretexting is a social engineering technique in which an attacker creates a fabricated scenario to establish credibility before making a request — posing as an IT auditor, supplier, or bank representative. AI now generates highly detailed pretexts using information scraped from public sources, making this form of attack harder to detect than ever. The defence is consistent verification protocols that apply regardless of how plausible the scenario seems.

What is Deepfake-as-a-Service and why does it matter for NZ businesses?

Deepfake-as-a-Service refers to commercial platforms providing ready-to-use AI tools for voice cloning and video impersonation to anyone willing to pay — including criminals with no technical background. DaaS has dramatically lowered the barrier to launching sophisticated attacks, which is why deepfake-enabled fraud has grown 2,137% over three years. For NZ businesses, this means the threat of voice and video impersonation is now accessible to low-skill attackers.

How often should staff receive social engineering awareness training?

Quarterly at minimum. Annual training no longer reflects the pace at which these tactics evolve. Staff should receive updated training whenever a significant new attack method emerges — such as when AI voice cloning became operationally widespread. Training should be supplemented by realistic simulations including vishing calls and pretexting scenarios, not just phishing emails. Research shows consistent training reduces susceptibility by up to 70% over 12 months.

What verification protocols should NZ businesses have to prevent attacks?

Every NZ business should document a protocol requiring independent verification for any request involving payment authorisation, bank account changes, credential sharing, or access grants. Independent means a separate channel — not a reply to the requesting message and not a call to the number that initiated the contact. The protocol must apply regardless of how senior the apparent requestor is or how urgent the request appears, and following it should never result in disciplinary action.

How does Exodesk help NZ businesses defend against social engineering?

Exodesk provides security awareness training covering AI voice cloning and deepfake attacks, phishing and vishing simulations, security posture assessments, and verification protocol design for South Island businesses. Our teams are based in Christchurch and Dunedin and work with SMEs across healthcare, professional services, and construction on fixed-price managed security arrangements that include ongoing training updates as threats evolve.
