AI Weekly: The implications of self-driving tractors and coming AI regulations



It’s 2022, and the AI industry is off to a slow – but nonetheless eventful – start. While the spread of the Omicron variant is affecting individual conferences, enterprises aren’t letting the pandemic stand in the way of technological progress.

John Deere previewed a tractor that uses AI to find its own way to a farm and plow the soil without instruction. According to Wired’s Will Knight, it – and self-driving tractors like it – could help alleviate the growing labor shortage in agriculture; employment of agricultural workers is expected to grow by only 1% from 2019 to 2029. But such tractors also raise questions about the changing role of human farmers, as well as vendor lock-in.

For example, farmers may become increasingly dependent on Deere’s systems for decision-making. The company could also use the data collected from autonomous tractors to develop features locked behind subscriptions, stripping farmers of their autonomy.

Driverless tractors are a microcosm of the growing role of automation across industries. Numerous reports warn that while AI may increase productivity, profitability, and creativity, these benefits will not be evenly distributed. AI will complement roles in areas where there is no substitute for skilled workers, such as health care. But in industries that rely on standardized routines, AI has the potential to replace jobs outright.

A report from American University suggests that legislators bridge this gap by restructuring school curricula to reflect changing skills demands. Regulation also has a role to play in preventing companies from monopolizing AI in certain industries to pursue consumer-hostile practices. The right solution – or, more precisely, the right combination of solutions – remains elusive. But the mass-market advent of self-driving tractors is another reminder that technology often outpaces policymaking.

Regulating algorithms

Speaking of regulators, China this week detailed plans to rein in the algorithms that apps use to recommend what consumers buy, read, and view online. According to a report in the South China Morning Post, companies that use this type of “recommender” algorithm will need to “promote positive energy” and allow users to decline the suggestions their services offer.

The move – which will affect corporate giants including Alibaba, Tencent, and TikTok owner ByteDance – is aimed at tightening oversight of the Chinese tech industry. But it also reflects a broader push by governments to curb the misuse of AI deployed in pursuit of profit at any cost.

Ahead of the European Union’s (EU) comprehensive AI Act, a government think tank in India has proposed an AI oversight board to establish a framework for “enforcing responsible AI principles.” In the U.K., the government has launched a national standard for algorithmic transparency, recommending that public sector organizations in the country explain how they use AI to make decisions. And in the U.S., the White House has issued draft guidance for U.S. agencies on whether and how to regulate AI.

A recent Deloitte report predicts that 2022 will see intensifying debate over regulating AI “more systematically,” although its coauthors acknowledge that any resulting regulations are unlikely to be implemented before 2023 (or later). Some jurisdictions may even seek to ban – and, indeed, have already restricted – entire AI subfields, such as facial recognition in public spaces and social scoring systems, the report notes.

Why now? AI is becoming more widespread and ubiquitous, attracting greater regulatory scrutiny. The technology’s implications for fairness, bias, discrimination, diversity, and privacy are also coming to the fore, as is the geopolitical advantage that AI rules could confer on the countries that implement them first.

Regulating AI will not be easy. AI systems are difficult to audit, and there is no guarantee that the data used to train them is “free of errors and complete” (as the EU’s AI Act requires). In addition, countries may pass conflicting rules that make compliance more challenging for companies. But Deloitte presents the emergence of a “gold standard” as the best-case scenario, as happened with the EU’s General Data Protection Regulation on privacy.

“More regulations on AI will be enacted in the near future. While it’s not clear exactly what those regulations will look like, it is likely that they will materially affect AI use,” Deloitte writes. It’s a safe bet.

For AI coverage, send news tips to Kyle Wiggers – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

Senior Staff Writer

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact. Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • Up-to-date information on the subjects of interest to you
  • Our newsletters
  • Gated thought-leader content and discounted access to our prized events, such as Transform 2021
  • Networking features, and more

Become a member
