LIKE many Malaysians, I often have to remind my colleagues, neighbours and friends that chat groups are not the best place to discuss politics, especially topics on race relations and religion.
Some of us often forget that participants in chat groups may not necessarily share the same sentiments and enthusiasm. Chat groups are created for specific agendas and purposes, but we do go off-track sometimes.
The workplace is no different. Divergent opinions can lead to creativity and better ways of doing things once a consensus is reached. However, it can also result in strong disagreements and even conflict, potentially breaking a team.
As managers, we are familiar with such situations and must always think about how best to manage divergent opinions in professional settings.
As we come to the end of 2024 and brace for an uncertain 2025 – a time of political upheaval, with a new US president and heightened geopolitical tensions affecting every region of the world – it is also a good time to focus on managing our own backyard.
Being respectful and professional is always key, according to the Chartered Management Institute’s (CMI) tips for managers – and managers should be brave enough to shut down conversations that make some colleagues feel uncomfortable.
It is important to remind teams that the workplace is not always the best place for heated political discussions, especially if they prove unproductive and inconsequential to work.
The bigger challenge requiring managers’ attention in 2025 is the march of artificial intelligence (AI) in the workplace. Forget about scheming and untrustworthy politicians.
AI is the number one priority – the better it is managed, the more likely organisations are to adopt it successfully and avoid potential pitfalls. The good news is that the Malaysian Employers Federation (MEF) believes that a significant portion of companies in Malaysia are proactive in this regard.
MEF president Datuk Syed Hussain Syed Husman cites the Cisco AI Readiness Index survey conducted in November last year, which revealed that 46% of Malaysian organisations are prepared to adopt AI technology in line with the Fourth Industrial Revolution (IR 4.0). The study indicated that 13% of these entities are fully ready, with an additional 33% classified as partially ready.
For AI to take off, good management matters. The positive impact of management and leadership on organisational performance is well documented, including by Haskel et al (2007) in the United Kingdom and by Bloom et al (2010), who found that better management led to productivity increases of 13% to 17%.
Data from the UK’s Office for National Statistics shows that companies with strong management practices are significantly more likely to drive tech and AI adoption: those with top-tier management scores are far more likely to adopt AI (37% in the top decile compared to just 3% in the bottom) and to recognise its relevance.
While only 32% of top-performing companies see AI as inapplicable, this figure rises sharply to 74% among those with lower management scores.
However, CMI research reveals that anxiety around AI technologies remains widespread, with over two in five (44%) UK managers reporting concerns raised by colleagues and direct reports about new and emerging AI tools within their organisations.
Alarmingly, fewer than one in 10 managers (9%) believe their organisation is adequately equipped to work with AI, with most receiving little to no training on how to manage or integrate these technologies effectively.
Researchers have found that managers will increasingly play a critical role in interpreting AI-generated insights, ensuring these align with organisational goals, and making judgment calls that require human intuition and ethical consideration.
AI will impact every department and section, with no exceptions. Human resources managers, for instance, will need to determine whether AI is writing recruits’ curricula vitae and cover letters.
If so, should this be a cause for concern? Are graduates making themselves more attractive to employers by demonstrating a willingness to use AI? Or does this come across as lazy or lacking in creativity?
What does it tell potential employers? Is it deceitful or clever? And should employers be using AI-detection software?
For news editors in TV studios and newsrooms, shouldn’t they be leading the charge to use AI to eliminate tedious work, allowing staff to focus on creativity and more purposeful tasks?
As we end the year, some companies are still struggling with hybrid working.
It is safe to say that most Malaysian employers have insisted their staff return to the office physically.
This will also be the last year when public listed companies are allowed to conduct annual general meetings for shareholders solely online.
Beginning next year, public listed companies must have physical annual general meetings, with online participation as an additional option.
As we approach the fifth anniversary of the pandemic, the challenge for 2025 will be for managers to ensure they get it right.
Malaysian managers still holding on to the hybrid workplace would know by now whether it is still effective. – WONG CHUN WAI, award-winning veteran journalist and Bernama chairman
World ID offers a revolutionary approach to verifying humanness without compromising personal data
Users can quickly sign up for a verified World ID at an Orb and use it to authenticate actions, like signing into websites, without sharing personal information.
AS artificial intelligence (AI) continues to evolve, it is becoming increasingly adept at replicating human behaviour online, blurring the line between genuine and automated interactions.
In the wrong hands, AI can be a potent tool for spreading misinformation, phishing scams, fraud, and data breaches – a growing concern as the world moves further into the digital realm.
Recognising these risks, tech visionaries Sam Altman of OpenAI and Alex Blania of Tools for Humanity saw the need for a privacy-focused human verification and financial network, leading to the creation of World – previously known as Worldcoin.
“Altman believes humanity needs a ‘human gate,’ where certain online activities or products are restricted to verified individuals,” explains World’s Europe managing director Fabian Bodensteiner.
But that raises the question: How can they prove that someone is human?
The answer? World ID – a digital protocol developed by World that confirms a person’s humanity or proof of humanness without sharing personal information.
A digital proof of humanness
When a person verifies their World ID via an Orb, the device takes pictures of their iris and face.
These pictures are used to make a unique iris code, a series of 1s and 0s. No two iris codes are the same, nor do they reveal direct identifiers such as name, gender or age.
The code is then split into different pieces and permanently encrypted using Secure Multi-Party Computation (SMPC), which anonymises data by dividing it into multiple abstracted values (SMPC shares) and storing them in separate locations managed by different parties.
Each party only has access to the SMPC share under their control.
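The article does not spell out World’s exact SMPC construction, but the core idea of splitting a secret into individually meaningless shares can be illustrated with additive secret sharing. The Python sketch below is a minimal illustration under that assumption; the function names and the modulus are invented for the example, not World’s actual implementation.

```python
import secrets

MOD = 2**64  # illustrative modulus for 64-bit share arithmetic (assumption)

def split_into_shares(value: int, n_parties: int) -> list[int]:
    """Split an integer (e.g. one chunk of an iris code) into additive shares.
    Each share on its own is uniformly random and reveals nothing."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)  # shares sum back to the value mod MOD
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only someone holding every share can recover the original value."""
    return sum(shares) % MOD

chunk = int("10110010" * 8, 2)        # a made-up 64-bit slice of an iris code
shares = split_into_shares(chunk, 3)  # e.g. three separately managed parties
assert reconstruct(shares) == chunk
```

Because each share on its own is statistically random, compromising any single storage location reveals nothing about the underlying iris code.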
Bodensteiner likens World ID to a digital passport stored on a user’s mobile device via the World App, which supports World ID.
“We didn’t want to follow the standard Know Your Customer (KYC) process, which often requires users to share personal details like names and addresses,” Bodensteiner notes, highlighting their goal to help businesses reduce data collection for privacy purposes.
“Think of it like this – just as you have a national ID or driver’s licence, World ID offers an additional anonymous credential: a digital proof of humanness,” he says.
The proof of humanness verification naturally limits the creation of multiple fake accounts, curbing large-scale bot attacks and ensuring content is from genuine individuals – an essential step in reducing AI-generated disinformation.
Flexible across sectors
To date, more than six million people have verified their World ID, reflecting the growing adoption of this revolutionary technology worldwide.
In Malaysia, World sees tremendous potential for expansion.
“Malaysia’s openness to new technologies and its diverse economy make it a strategic gateway for further expansion into Asia,” says Bodensteiner.
Bodensteiner says the proof of humanness protocol enhances online security and accountability in the age of AI.
He adds that all World technologies, including hardware, are open-source, enabling innovation and collaboration across various sectors.
The World ID technology allows for seamless authentication across web and mobile platforms.
Its applications extend across multiple sectors, such as gaming, social networking, and marketing, where personhood verification is crucial to reducing fake accounts and ensuring genuine human interaction.
“For instance, video gaming platforms can benefit from personhood verification by allowing individuals to unlock exclusive deals, enhancing the gaming experience while keeping the ecosystem free from bot-driven accounts,” Bodensteiner explains.
Local social networks and e-commerce platforms can also leverage World ID to enhance safety and prevent fraudulent activities, such as repeated voucher redemptions, ensuring a more secure and beneficial environment for legitimate users.
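How might a platform enforce that without learning who the user is? As a rough sketch, assume (hypothetically) that the verification step hands the platform only an anonymous, app-scoped identifier – often called a nullifier – that is stable per human but carries no personal data. A backend could then cap redemptions per verified human:

```python
# Hypothetical backend check. Assumes the verification step hands the platform
# an anonymous, app-scoped identifier ("nullifier") that is stable per human
# but contains no personal data; names and flow here are illustrative only.
redeemed_by: dict[str, set[str]] = {}  # voucher code -> nullifiers that used it

def redeem_voucher(nullifier: str, voucher_code: str) -> bool:
    """Allow each verified human to redeem a given voucher only once."""
    used = redeemed_by.setdefault(voucher_code, set())
    if nullifier in used:
        return False  # same human again, even if using a different account
    used.add(nullifier)
    return True

print(redeem_voucher("0x5ba7...", "WELCOME10"))  # True: first redemption
print(redeem_voucher("0x5ba7...", "WELCOME10"))  # False: repeat blocked
```

The platform never sees a name or address; it only learns that this particular human has already redeemed the voucher.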
Recently, Worldcoin rebranded itself as World, signalling a broader mission to build a comprehensive identity and financial network that empowers every individual in the digital economy.
The shift reflects the project’s focus on creating a global network centred on anonymous proof-of-humanness technology and inclusive financial tools.
This new identity aims to drive the mission forward with a more unified and holistic approach.
World was born from a need to address these challenges and build a system that ensures equal access to the digital economy for all, regardless of financial circumstances or location – especially as AI continues to advance.
World ID remains the bedrock of this mission, empowering people to take charge of their privacy online, paving the way for a more inclusive and equitable future for everyone.
Currently, the Orbs are situated in a few locations across the country, with plans to expand to more sites over time. Individuals who have downloaded the World App can now schedule an appointment to have their World ID verified.
Big draw: Nvidia chief executive officer Jensen Huang speaking at an industry event in California. Some big tech companies are trying to show developers how to migrate away from Nvidia’s dominance in AI. — Bloomberg
Loosening Nvidia’s grip on AI by targeting software
Alliance seeks to use alternative open source software
SAN FRANCISCO: Nvidia earned its US$2.2 trillion market cap by producing artificial-intelligence (AI) chips that have become the lifeblood of the new era of generative AI, powering developers from startups to Microsoft, OpenAI and Google parent Alphabet.
Almost as important as its hardware is the company’s nearly 20 years’ worth of computer code, which helps make competing with it nearly impossible. More than four million global developers rely on Nvidia’s CUDA software platform to build AI and other apps.
Now a coalition of tech companies that includes Qualcomm, Google and Intel plans to loosen Nvidia’s chokehold by going after the chip giant’s secret weapon: the software that keeps developers tied to Nvidia chips.
They are part of an expanding group of financiers and companies hacking away at Nvidia’s dominance in AI.
“We’re actually showing developers how you migrate out from an Nvidia platform,” Vinesh Sukumar, Qualcomm’s head of AI and machine learning, said in an interview with Reuters.
Starting with a piece of technology developed by Intel called OneAPI, the UXL Foundation, a consortium of tech companies, plans to build a suite of software and tools that will be able to power multiple types of AI accelerator chips, executives involved with the group told Reuters.
The open-source project aims to make computer code run on any machine, regardless of what chip and hardware powers it.
“It’s about specifically – in the context of machine learning frameworks – how do we create an open ecosystem, and promote productivity and choice in hardware,” Google’s director and chief technologist of high-performance computing, Bill Magro, said in an interview.
Google is one of the founding members of UXL and helps determine the technical direction of the project, Magro said.
UXL’s technical steering committee is preparing to nail down technical specifications in the first half of this year. Engineers plan to refine the technical details to a “mature” state by the end of the year, executives said.
These executives stressed the need to build a solid foundation to include contributions from multiple companies that can also be deployed on any chip or hardware.
Beyond the initial companies involved, UXL will court cloud-computing companies such as Amazon.com and Microsoft’s Azure, as well as additional chipmakers.
Since its launch in September, UXL has already begun to receive technical contributions from third parties that include foundation members and outsiders keen on using the open-source technology, the executives involved said.
Intel’s OneAPI is already usable, and the second step is to create a standard programming model of computing designed for AI.
UXL plans to put its resources toward addressing the most pressing computing problems dominated by a few chipmakers, such as the latest AI apps and high-performance computing applications.
Those early plans feed into the organisation’s longer-term goal of winning over a critical mass of developers to its platform.
In the long run, UXL aims to support Nvidia hardware and code as well.
When asked about the open source and venture-funded software efforts to break Nvidia’s AI dominance, Nvidia executive Ian Buck said in a statement: “The world is getting accelerated. New ideas in accelerated computing are coming from all across the ecosystem, and that will help advance AI and the scope of what accelerated computing can achieve.”
The UXL Foundation’s plans are one of many efforts to chip away at Nvidia’s hold on the software that powers AI. Venture financiers and corporate dollars have poured more than US$4bil into 93 separate efforts, according to custom data compiled by PitchBook at Reuters’ request.
The interest in unseating Nvidia through a potential weakness in software has ramped up in the last year: startups aiming to poke holes in the company’s leadership gobbled up just over US$2bil in 2023, compared with US$580mil a year earlier, according to the data from PitchBook.
Success in the shadow of Nvidia’s grip on AI data crunching is something few of these startups will achieve.
Nvidia’s CUDA is a compelling piece of software on paper, as it is full-featured and consistently growing through both Nvidia’s contributions and the developer community. — Reuters
The Era of Digital Deception
Sophisticated scam technology harnessing artificial intelligence is capable of deceiving even the most vigilant.
COMPUTER-GENERATED children’s voices that fool their own parents. Masks created with photos from social media that deceive a system protected by Face ID.
They sound like the stuff of science fiction, but these techniques are already available to criminals preying on everyday consumers.
The proliferation of scam tech has alarmed regulators, police, and people at the highest levels of the financial industry. Artificial intelligence (AI) in particular is being used to “turbocharge” fraud, US Federal Trade Commission chair Lina Khan warned in June, calling for increased vigilance from law enforcement.
Even before AI broke loose and became available to anyone with an Internet connection, the world was struggling to contain an explosion in financial fraud.
In the United States alone, consumers lost almost US$8.8bil (RM40.9bil) last year, up 44% from 2021, despite record investment in detection and prevention. Financial crime experts at major banks, including Wells Fargo & Co and Deutsche Bank AG, say the fraud boom on the horizon is one of the biggest threats facing their industry.
On top of paying the cost of fighting scams, the financial industry risks losing the faith of burned customers.
“It’s an arms race,” says James Roberts, who heads up fraud management at the Commonwealth Bank of Australia (CBA), the country’s biggest bank.
“It would be a stretch to say that we’re winning.”
The history of scams is surely as old as the history of trade and business.
One of the earliest known cases, more than 2,000 years ago, involved a Greek sea merchant who tried to sink his ship to get a fraudulent payout on an insurance policy.
Look back through any newspaper archive, and you’ll find countless attempts to part the gullible from their money.
But the dark economy of fraud, just like the broader economy, has periodic bursts of destabilising innovation.
New technology lowers the cost of running a scam and lets the criminal reach a larger pool of unprepared victims.
Email introduced every computer user in the world to a cast of hard-up princes who needed help rescuing their lost fortunes.
Crypto brought with it a blossoming of Ponzi schemes that spread virally over social media.
The future of fake
The AI explosion offers not only new tools but also the potential for life-changing financial losses.
And the increased sophistication and novelty of the technology mean that everyone, not just the credulous, is a potential victim.
The Covid-19 lockdowns accelerated the adoption of online banking around the world, with phones and laptops replacing face-to-face interactions at bank branches.
It’s brought advantages in lower costs and increased speed for financial firms and their customers, as well as openings for scammers.
Some of the new techniques go beyond what current off-the-shelf technology can do, and it’s not always easy to tell whether you’re dealing with a garden-variety fraudster or a nation-state actor.
“We are starting to see much more sophistication with respect to cybercrime,” says Amy Hogan-Burney, general manager of cybersecurity policy and protection at Microsoft Corp.
Globally, cybercrime costs, including scams, are set to hit US$8 trillion (RM37.18 trillion) this year, outstripping the economic output of Japan, the world’s third-largest economy.
By 2025, it will reach US$10.5 trillion (RM48.8 trillion), after more than tripling in a decade, according to researcher Cybersecurity Ventures.
In the Sydney suburb of Redfern, some of Roberts’ team of more than 500 spend their days eavesdropping on cons to hear firsthand how AI is reshaping their battle.
A fake request for money from a loved one isn’t new. But now parents get calls that clone their child’s voice with AI to sound indistinguishable from the real thing.
These tricks, known as social engineering scams, tend to have the highest hit rates and generate some of the quickest returns for fraudsters.
Today, cloning a person’s voice is becoming increasingly easy.
Once a scammer downloads a short sample from an audio clip from someone’s social media or voicemail message – it can be as short as 30 seconds – they can use AI voice-synthesising tools readily available online to create the content they need.
Public social media accounts make it easy to figure out who a person’s relatives and friends are, not to mention where they live and work and other vital information.
Bank bosses stress that scammers, who run their operations like businesses, are prepared to be patient, sometimes planning attacks for months.
What fraud teams are seeing so far is only a taste of what AI will make possible, according to Rob Pope, director of New Zealand’s government cybersecurity agency, CERT NZ.
He points out that AI simultaneously helps criminals increase the volume and customisation of their attacks.
“It’s a fair bet that over the next two or three years we’re going to see more AI-generated criminal attacks,” says Pope, a former deputy commissioner in the New Zealand Police who oversaw some of the nation’s highest-profile criminal cases. “What AI does is accelerate the levels of sophistication and the ability of these bad people to pivot very quickly. AI makes it easier for them.”
To give a sense of the challenge facing banks, Roberts says right now the Commonwealth Bank of Australia is tracking about 85 million events a day through a network of surveillance tools.
That’s in a country with a population of just 26 million.
The industry hopes to fight back by educating consumers about the risks and increasing investment in defensive technology.
New software lets CBA spot when customers use their computer mouse in an unusual way during a transaction – a red flag for a possible scam.
Anything suspicious, including the destination of an order and how the purchase is processed, can alert staff in as few as 30 milliseconds, allowing them to block the transaction.
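CBA has not published how this detection works; as a minimal sketch of the general idea, a per-customer baseline of cursor behaviour could flag sessions that deviate sharply from it. All names, features, and thresholds below are hypothetical illustrations, not CBA’s system.

```python
import statistics

def is_unusual(sample: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a session reading (e.g. mean cursor speed during a transaction)
    that deviates more than `threshold` standard deviations from the
    customer's own historical baseline."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Invented per-session cursor speeds for one customer (pixels per second).
past_speeds = [410.0, 395.5, 430.2, 402.8, 415.1, 398.7]
print(is_unusual(1250.0, past_speeds))  # True: far outside normal behaviour
print(is_unusual(405.0, past_speeds))   # False: consistent with the baseline
```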
At Deutsche Bank, computer engineers have recently rebuilt their suspicious transaction detection system, called Black Forest, using the latest natural language processing models, according to Thomas Graf, a senior machine learning engineer there.
The tool looks at transaction criteria such as volume, currency, and destination and automatically learns from reams of data what patterns suggest fraud.
The model can be used on both retail and corporate transactions and has already unearthed several cases, including one involving organised crime, money laundering, and tax evasion.
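Deutsche Bank has not disclosed Black Forest’s internals beyond its use of natural language processing models, but the pattern-learning step the article describes – mapping transaction criteria to a fraud label – can be sketched with an off-the-shelf classifier. The data and feature choices below are invented purely for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented toy data covering the criteria the article names: volume (amount),
# currency, and destination, paired with a known fraud label to learn from.
history = pd.DataFrame({
    "amount":      [120.0, 9800.0, 45.5, 15000.0, 60.0, 22000.0, 310.0, 18500.0],
    "currency":    ["EUR", "USD", "EUR", "USD", "EUR", "USD", "EUR", "USD"],
    "destination": ["DE", "KY", "FR", "KY", "DE", "VG", "FR", "KY"],
    "is_fraud":    [0, 1, 0, 1, 0, 1, 0, 1],
})

# One-hot encode the categorical criteria, then learn fraud patterns from data.
features = pd.get_dummies(history[["amount", "currency", "destination"]])
model = RandomForestClassifier(random_state=0).fit(features, history["is_fraud"])

# Score a new transaction by encoding it against the same feature columns.
new_tx = pd.DataFrame({"amount": [21000.0], "currency": ["USD"], "destination": ["KY"]})
encoded = pd.get_dummies(new_tx).reindex(columns=features.columns, fill_value=0)
print(model.predict(encoded))  # [1] -> flag the transaction for human review
```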
Wells Fargo has overhauled its tech systems to counter the risk of AI-generated videos and voices. “We train our software and our employees to be able to spot these fakes,” says Chintan Mehta, Wells Fargo’s head of digital technology. But the system needs to keep evolving to keep up with the criminals. Detecting scams, of course, costs money.
The digital dance
One problem for companies: Every time they tighten things, criminals try to find a workaround.
For example, some US banks require customers to upload a photo of an ID document when signing up for an account.
Scammers are now buying stolen data on the dark web, finding photos of their victims on social media, and 3D-printing masks to create fake IDs with the stolen information.
“And these can look like everything from what you get at a Halloween shop to an extremely lifelike silicone mask of Hollywood standards,” says Alain Meier, head of identity at Plaid, which helps banks, financial technology companies, and other businesses battle fraud with its ID verification software. Plaid analyses skin texture and translucency to make sure the person in the photo looks real.
Meier, who’s dedicated his career to detecting fraud, says the best fraudsters, those running their schemes as businesses, build scamming software and package it up to sell on the dark web.
Prices can range from US$20 (RM95) to thousands of dollars.
“For example, it could be a Chrome extension to help you bypass fingerprinting or tools that can help you generate synthetic images,” he says.
As fraud gets more sophisticated, the question of who’s responsible for losses is getting more contentious.
In the United Kingdom, for example, victims of unknown transactions – say, someone copies and uses your credit card – are legally protected against losses.
If someone tricks you into making a payment, responsibility becomes less clear.
In July, the UK’s top court ruled that a couple who were fooled into sending money abroad couldn’t hold their bank liable simply for following their instructions.
But legislators and regulators have leeway to set other rules: The government is preparing to require banks to reimburse fraud victims when the cash is transferred via Faster Payments, a system for sending money between UK banks.
Politicians and consumer advocates in other countries are pushing for similar changes, arguing that it’s unreasonable to expect people to recognise these increasingly sophisticated scams.
Banks worry that changing the rules would simply make things easier for fraudsters.
Financial industry leaders around the world are also trying to push a share of the responsibility onto tech firms.
The fastest-growing scam category is investment fraud, often introduced to victims through search engines where scammers can easily buy sponsored advertising spots.
When would-be investors click through, they often find realistic prospectuses and other financial data. Once they transfer their money, it can take months, if not years, to realise they’ve been swindled when they try to cash in on their “investment”.
In June, a group of 30 lenders in the UK sent a letter to Prime Minister Rishi Sunak asking that tech companies contribute to refunds for victims of fraud stemming from their platforms.
The government says it’s planning new legislation and other measures to crack down on online financial scams.
The banking industry is lobbying to spread responsibility more widely, in part because costs appear to be going up. Once again, a familiar problem from economics applies in the scam economy, too.
Like pollution from a factory, new technology is creating an externality, or a cost imposed on others. In this case, there’s a heightened reach and risk for scams.
Neither banks nor consumers want to be the only ones forced to pay the price.
Chris Sheehan spent almost three decades with the Australian Federal Police before joining National Australia Bank Ltd, where he heads investigations and fraud.
He’s added about 40 people to his team in the past year, backed by constant investment from the bank.
When he adds up all the staff and tech costs, “it scares me how big the number is”, he says.
“I am hopeful because there are technological solutions, but you never completely solve the problem,” he says. It reminds him of his time fighting drug gangs as a cop.
Framing it as a war on drugs was “a big mistake”, he says.
“I will never phrase it in that framework – of a war on scams – because the implication is that a war is winnable,” he says. “This is not winnable.” – Bloomberg