Where is AI useful in business?

If you’re planning to use AI in your business, be careful

Artificial intelligence is set to affect nearly 40% of all jobs, according to a new analysis by the International Monetary Fund (IMF).

While that might sound like good news in terms of reducing your overheads, particularly payroll costs, there is a downside to using AI.

The IMF warns: “The technology is facing increased regulation around the world. Last month, European Union officials reached a provisional deal on the world’s first comprehensive laws to regulate the use of AI.”

Generative AI is, put simply, AI that can quickly create new content, be it words, images, music or videos. And it can take an idea from one example, and apply it to an entirely different situation.

This has already led to court cases for copyright infringement, because it is usually impossible to establish where the generated content has come from; it is typically drawn from multiple sources.

Ben Wood, chief analyst at CCS Insight, says: “regulation and legal battles might cool off the current mania for generative AI.”

AI & Cyber Security

According to a survey by PwC, 37% of the 3,900 companies asked said they were worried that they were “highly or extremely exposed to cyber risks”.

While three fifths saw AI as a positive, cyber and digital risks were top of mind in 2023, with the leaders responsible for managing risk ranking cyber threats higher than inflation.

More than ever, this emphasises the need for robust processes in business to guard against hacking and other cyber security risks.

This means ensuring that only those who need it have access to sensitive data.

It also means having a robust password system, including regular password changes and two-factor authentication (see the short sketch below).

Plus, it is wise to ensure that all employees are trained to be security-aware online and are kept regularly updated as new threats emerge.
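For the more technically minded, a control like two-factor authentication is not exotic to prototype. The sketch below is a minimal illustration in Python, assuming the third-party pyotp library; the function and variable names are our own for illustration and are not part of any particular product.

```python
# Minimal sketch: verifying a time-based one-time password (TOTP)
# as a second factor. Assumes the third-party "pyotp" library
# (pip install pyotp). Names are illustrative only.
import pyotp

# In a real system this secret is generated once per user at enrolment
# and stored securely on the server, never hard-coded.
user_secret = pyotp.random_base32()

def second_factor_ok(secret: str, submitted_code: str) -> bool:
    """Return True if the submitted six-digit code matches the user's TOTP secret."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates small clock drift between server and phone.
    return totp.verify(submitted_code, valid_window=1)

# Simulate the code an authenticator app would display right now.
current_code = pyotp.TOTP(user_secret).now()
print(second_factor_ok(user_secret, current_code))  # True
print(second_factor_ok(user_secret, "000000"))      # almost certainly False
```

In practice the secret would be issued when each user enrols and kept server-side, and most businesses will rely on an off-the-shelf identity provider rather than rolling their own; the point is simply that the mechanism itself is straightforward.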

AI – Friend or enemy?

The risks from AI should be treated as seriously as the climate crisis, according to one of the technology’s leading figures.

He was speaking ahead of a UK-hosted summit on the safety of AI due to be held on 1 and 2 November at Bletchley Park, the base for Britain’s codebreakers in the Second World War.

He was advocating the creation of a body similar to the Intergovernmental Panel on Climate Change (IPCC).

Among the risks cited were “aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.”

His call has been echoed by others including Eric Schmidt, the former Google chief executive, and Mustafa Suleyman, the co-founder of DeepMind.

There is no denying the immense opportunities that AI presents, particularly for routine tasks, freeing up human resources for more creative activity.

But it is a powerful tool and thus susceptible to abuse without proper regulation and oversight.

What do you think?

Copyright and AI

It had to happen eventually.

Writers, artists and others are realising that they need to protect their work, as it has become clearer that AI draws on multiple sources to produce the information people use for research.

The information collected by AI is not attributed, so it is impossible to know where it has come from.

According to a BBC investigation: “The new wave of generative AI systems are trained on vast amounts of data – text, images, video, and audio files, all scraped from the internet. Content can be created within seconds of a simple text prompt.”

There has been a growing number of lawsuits about the issue, including one by Getty Images earlier this year.

There clearly needs to be more regulation in this area, and artists and writers in particular are campaigning for copyright laws to be updated to reflect the new environment created by AI.

According to the BBC:

“The EU appears to be taking the lead, with the EU AI Act proposing that AI tools will have to disclose any copyrighted material used to train their systems.

In the UK, a global summit on AI safety will take place this autumn.”

AI will not take over the world

In a recent blog we discussed the reliability of AI and automation, and the fact that these systems are devised by human beings; highly skilled human beings, to be sure, but human beings who make mistakes.

Wired has just published two further articles exploring the issue of AI.

In the first, it explores whether it is possible to make AI technology completely unbiased, and asks how many businesses are benefiting from the technology as much as they could.

It reports that the return on business investment in AI has declined by 27 per cent over the last five years.

The reason, it argues, is that “companies don’t know how to make the most of AI and data analytics, and how they can be applied to business problems.”

It also suggests that businesses get things the wrong way round when considering investing in AI, so that they under-use its potential. It advises that businesses should “start by drawing up a list of business challenges and prioritise them by whether or not they can be addressed by using AI and the expected return on investment”.

The second article, by Joi Ito, director of MIT’s Media Lab, questions the assumption that AI can and will supersede humans in almost every sphere of activity.

Ito calls this assumption the Singularity: a future in which those people who have succeeded in mastering the power of AI capture all the wealth and power.

This, Ito argues, is “reductionist” thinking: it works only for a very narrow range of learning and thinking, and it can lead to over-simplified ways of “fixing” humanity’s problems.

However, Ito says, most of the challenges we face today, such as climate change, poverty, chronic disease and modern terrorism, have actually been the result of this reductionist thinking, and we need to recognise that many human problems are actually much more complex.

Machines, and therefore AI, need to be adaptive and to augment, not replace, humans: as Ito puts it, “not artificial intelligence but extended intelligence”.