Preliminary research published at the end of March claimed that up to 80% of the US workforce could see at least 10% of their tasks impacted by large language models such as the generative pretrained transformer (GPT) underpinning ChatGPT.

First, note the caveats. A preprint paper has not yet been peer reviewed, though the open access arXiv server where it was posted does have a moderation process. This particular preprint comes from OpenAI, makers of ChatGPT, along with OpenResearch, a research lab chaired by OpenAI CEO Sam Altman, and the University of Pennsylvania. It's also worth noting that GPT-4 was itself used for some of the writing and coding in this research.

That said, some of the paper's conclusions chime with an economic study issued in December 2022 by the White House and the EU, which examined the potential impact of AI on future workforces.

This report didn't just look at the roles and tasks at risk of being overtaken by AI, but also at the broader impacts this technology might have across the workforce, such as increased surveillance of workers using AI, algorithmic management of work, income loss, the evolution of worker skills, and mobility across jobs.

What both reports have in common is a warning that AI is not just a threat to jobs comprising repetitive digital tasks that are easily automated. In fact, many high-skill and high-paid jobs are highly exposed to AI. The economic report also warns that this leaves older workers particularly exposed, as they have had time to accrue the education and experience that can potentially be supplanted by an AI that is cheaper to employ or incredibly speedy with results – or both.

High-risk roles

When it comes to the specifics, the two reports differ as to which roles are most at risk. OpenAI's research concludes that mathematicians, tax preparers, writers, web designers, accountants, journalists and legal secretaries are those most vulnerable to an AI takeover. The US-EU study says clinical lab technicians, chemical engineers, optometrists and power plant operators are most exposed.

Really, it all comes down to what these tools are capable of doing with a level of speed and accuracy that rivals or far surpasses human workers. In terms of skills, OpenAI's report identifies programming and writing as highly susceptible to the influence of large language models.

However, it's important to note that while generative AIs continue to "hallucinate" (generating confident responses that run counter to the facts), their outputs can only be trusted under the supervision of trained and experienced professionals capable of calling out any missteps. That is to say, AI journalism will need good editors and fact-checkers to review content before publication, and AI-written code will need thorough testing and validation before deployment.

This is the knowledge gap AI may never be able to overcome. Even the most advanced AI systems, reaching more than 90% accuracy, appear to plateau at that level, unable to reach the ideal of 100% reliability. And therein lies a conundrum. If the future needs expert oversight of AI-generated outputs to ensure accuracy and trust, where will those experts come from if a generation of workers skips the learning and experience gained from doing the groundwork over years and years?

If lower-level tasks are farmed out to AI to assist workers, those workers' experience may become less valuable in the long term. This has already been flagged as an issue among pilots, a profession where much of the work has been handed over to automation.

A 2013 report from a US working group examining flight deck automation identified vulnerabilities in pilots' knowledge and skills driven by an over-reliance on computer-assisted piloting and passive training in its use. This can leave pilots unprepared to take control of a flight when things go wrong, which can have disastrous results. The working group found that more than 60% of the accidents it reviewed involved a manual error.

AI-proof skills

Being more hands-on with your work is certainly a major advantage amid the AI revolution. While generative AI is racing ahead with the creation of digital content, plenty of work in the physical realm is being left to the dexterity of skilled human hands and creative thinking.

The joint US-EU report also cites jobs that require interpersonal skills – such as childcare and hospitality – as having very limited exposure to AI. According to OpenAI's research, jobs with a greater need for skills in science and critical thinking are less likely to be impacted by large language models. It concludes that the jobs least likely to be impacted by GPTs overall are in graphic design, search marketing strategy and financial management.

If you're looking for a job that has a good chance of surviving an abundance of AI entering the workforce, you can search for roles in these areas on the TechSpot Job Board. Here are just a few available right now:

Supply Chain Tech - Kinaxis Planning - Senior - US Consulting, EY, Dallas

A rapidly growing area for EY, the Supply Chain Tech - Kinaxis Planning Senior will have plenty of opportunity to develop their skill set to keep up with the ever-growing demands of the digital landscape. You will focus on the evaluation, design, customization and optimization of digital and cloud-based solutions. You will team with various EY groups with digital capabilities to pursue and deliver digital engagements, delivering solutions that bring clients' digital vision and strategy to life.

ASIC Design Verification Engineer, Accenture, Houston

Accenture is seeking an experienced ASIC Design Verification Engineer to provide design verification services for complex multi-CPU/DSP SoCs on the most advanced technology nodes. You'll build a verification environment using SV/UVM methodology, build reusable bus functional models, monitors, checkers and scoreboards, and drive coverage-driven verification closure. To apply, a minimum of two years of experience with RTL design and verification is required, as is a Bachelor's degree or 12 years of work experience.

Systems Engineer, CACI, Sterling

As a Systems Engineer, you will be responsible for engaging in the design, development, testing, and delivery of CACI's products in support of advanced systems. You will be working within multidisciplinary hardware, software, system integration, and data analysis/collection teams creating systems in support of the Department of Defense and other Intelligence Community agencies. You will translate customer requirements into technical requirements and specifications, while coordinating a cross-functional development team to ensure the continued timely delivery of impactful products to customers.