Distrust is blocking the four-day week

It’s time to reduce our working week, and we can achieve it by better directing our technology to benefit society.  The pioneering economist John Maynard Keynes predicted this as far back as his 1930 essay “Economic Possibilities for our Grandchildren.”  He envisioned a future where “three-hour shifts or a fifteen-hour week” would suffice, arguing that a minimal amount of work would meet the needs of society.  Technology has promised us an easier working week for decades but has largely failed to deliver.

Keynes anticipated ongoing advances in technology.  After all, technology-driven productivity can deliver higher wages, reduced hours, higher profits, or lower prices.  However, planning to do all of these at once, or failing to plan at all, means that none of them is achieved.  It has proven too easy for the benefits of technology to dissipate into meaningless features and functions.  Rather than more productivity, we get more apps!

Generative AI offers the best opportunity in years to achieve the possibilities that Keynes promised.  But the current focus on controlling AI is misguided; we need to prioritise societal goals and governance supported, rather than led, by regulation.  That governance needs to be based on an educated view of the technology rather than misplaced fears of unemployment.  One of the main reasons we don’t accept reduced working hours is that we fear being displaced by the technology altogether!

The heightened awareness of AI after the release of ChatGPT is an opportunity for governments, industries and labour to work with communities to decide where their priorities lie.  Arguably this has been true for the entirety of the IT revolution in business, but as Robert Solow famously said, “You can see the computer age everywhere but in the productivity statistics”.  Learning from the past, this time we can capture the benefits of technology and agree where the dividends will be applied.

Distrust of technology, and of those who provide it, isn’t new, but in recent years it has manifested as fear of workforce displacement, concern about the misuse of our data and suspicion of the decisions made by our systems.  These are the three areas we must address to build community confidence: displacement, data and decisions.  Arguably this is as true of digital technologies in general as it is of AI, and Gen AI in particular.

It starts with the greatest fear of citizens: their displacement from the workforce.  The reality is that there is more than enough work to go around, but we do need to manage the changes in the jobs people do and the skills they need to do them.  Navigating these changes is more important than artificially holding onto work through conservative labour and work practices.  Managed properly, the benefits of more efficient work, through greater productivity, can be shared by everyone.  Some argue that Gen AI is different because it introduces human-like conceptual advice and decision making.  However, it is better to see Gen AI as simply adding to the portfolio of technology we already rely on.  It will be as much a part of most workplaces as spreadsheets and word processing.

Misuse of our data is likely the community’s second greatest fear about AI, behind only displacement.  Data governance and its obligations have been around for a long time, with most jurisdictions having requirements for the handling of personal data, privacy obligations, the protection of copyright holders and other usage rules.  All too often, however, we’re solving the wrong problem.  While we all want our privacy, many of the issues really relate to the nefarious use of our personal data to steal our identities.  Moves by many governments to formalise digital identity solutions could well reduce the current exposure, where bad actors can leverage stolen dates of birth, personal histories, passport numbers and more to access critical systems.

Reducing the harm from the misuse of data will take much of the angst out of this aspect of technology in general and AI in particular.  However, the need to accurately track the use of data in systems, and particularly in the training of AI, becomes ever more important as the third great fear comes into focus: the use of AI to make decisions that affect all of us in the real world.

It is the decisions that the technology makes on our behalf, or about us, that really lead many to worry about a dystopian future.  The algorithms embedded in our software, including AI, combine with data to make all manner of decisions.  We live in fear of a computerised “no” every time we apply for credit, submit a job application, get a quote for insurance or undertake any number of other important activities.  Algorithm-based decisions are not new, nor are the problems that surround them.  However, the obligation on technologists to explain the decisions that machines make, and to ensure that human safeguards are always available, is greater now than ever.

A collaboration among government, community, and business sectors on the three ‘D’s of displacement, data and decisions could unlock a better future for everyone.  With this alignment, we can strike the right balance between increased wages, reduced working hours, greater profits and reduced prices, and deliberately plan to achieve it.  But all of this starts with building trust in our latest technology and learning from the past.
