OpenAI Said to Aim to Attract More Investment by Removing ‘AGI’ Clause With Microsoft

OpenAI is in discussions to remove a clause that shuts Microsoft out of the start-up’s most advanced models when it achieves “artificial general intelligence”, as it seeks to unlock future investments, the Financial Times reported on Friday.

Under the current terms, when OpenAI creates AGI – defined as a “highly autonomous system that outperforms humans at most economically valuable work” – Microsoft’s access to such technology would be voided.

The ChatGPT-maker is exploring removing the condition from its corporate structure, enabling Microsoft to continue investing in and accessing all OpenAI technology after AGI is achieved, the FT reported, citing people familiar with the matter.

Microsoft and OpenAI did not immediately respond to Reuters’ requests for comment.

The clause was included to protect the technology from being misused for commercial purposes by reserving its ownership for OpenAI’s non-profit board.

“AGI is explicitly carved out of all commercial and IP licensing agreements,” according to OpenAI’s website.

The OpenAI board would determine when AGI is achieved, the website said.

OpenAI’s board is discussing the options and a final decision has not been made, the FT report said.

Microsoft-backed OpenAI was working on a plan to restructure its core business into a for-profit benefit corporation no longer governed by its non-profit board, Reuters first reported in September.

In October, OpenAI closed a $6.6 billion funding round which valued it at $157 billion.

© Thomson Reuters 2024

Amazon Web Services (AWS) Launches Automated Reasoning Checks in Preview to Combat AI Hallucinations

Amazon Web Services (AWS) launched a new service at its ongoing re:Invent conference that will help enterprises reduce instances of artificial intelligence (AI) hallucination. Launched on Monday, the Automated Reasoning checks tool is available in preview and can be found within Amazon Bedrock Guardrails. The company claimed that the tool mathematically validates the accuracy of responses generated by large language models (LLMs) and prevents factual errors caused by hallucinations. It is similar to the Grounding with Google Search feature, which is available via both the Gemini API and Google AI Studio.

AWS Automated Reasoning Checks

AI models can often generate responses that are incorrect, misleading, or fictional. This is known as AI hallucination, and the issue impacts the credibility of AI models, especially when used in an enterprise space. While companies can somewhat mitigate the issue by training the AI system on high-quality organisational data, the pre-training data and architectural flaws can still make the AI hallucinate.

AWS detailed its solution to AI hallucination in a blog post. The Automated Reasoning checks tool has been introduced as a new safeguard and is added in preview within Amazon Bedrock Guardrails. Amazon explained that it uses “mathematical, logic-based algorithmic verification and reasoning processes” to verify the information generated by LLMs.

The process is pretty straightforward. Users will have to upload relevant documents that describe the rules of the organisation to the Amazon Bedrock console. Bedrock will automatically analyse these documents and create an initial Automated Reasoning policy, which will convert the natural language text into a mathematical format.

Once done, users can move to the Automated Reasoning menu under the Safeguards section. There, a new policy can be created and users can add existing documents that contain the information that the AI should learn. Users can also manually set processing parameters and the policy’s intent. Additionally, sample questions and answers can also be added to help the AI understand a typical interaction.

Once all of this is done, the AI will be ready to be deployed, and the Automated Reasoning checks tool will automatically verify the chatbot’s responses and flag any that conflict with the established rules. Currently, the tool is available in preview only in the US West (Oregon) AWS region. The company plans to roll it out to other regions soon.
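
To make the workflow concrete, here is a minimal, hypothetical boto3 sketch of attaching an Automated Reasoning policy to a Bedrock guardrail. The create_guardrail API is real, but the automatedReasoningPolicyConfig field name, its shape, and the policy ARN are assumptions for illustration; since the feature is in preview and configured through the Bedrock console, the actual programmatic interface may differ.

import boto3

# Control-plane client for Amazon Bedrock; us-west-2 is the US West
# (Oregon) region where the preview is available.
bedrock = boto3.client("bedrock", region_name="us-west-2")

response = bedrock.create_guardrail(
    name="hr-policy-guardrail",
    description="Checks chatbot answers against the organisation's HR rules",
    # ASSUMPTION: this field name and shape are guesses. The Automated
    # Reasoning policy itself would first be created by uploading rule
    # documents in the Bedrock console, which converts the natural
    # language text into a mathematical format.
    automatedReasoningPolicyConfig={
        "policies": [
            "arn:aws:bedrock:us-west-2:123456789012:automated-reasoning-policy/example"
        ],
    },
    blockedInputMessaging="Sorry, this request cannot be processed.",
    blockedOutputsMessaging="This response was blocked because it could not be verified against policy.",
)
print(response["guardrailId"], response["version"])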

World Labs Unveils AI System That Can Generate 3D Interactive Worlds Using an Image

World Labs, an artificial intelligence (AI) startup, unveiled its first AI system on Monday. The currently unnamed AI system can generate interactive 3D worlds from an image input, turning the 2D visual asset into explorable 3D scenes that users can navigate with a keyboard and mouse. The AI system is currently in preview and has not been made public. However, the startup, which was founded by computer scientist Fei-Fei Li, stated that it is working to release the full version soon.

World Labs Unveils AI System Capable of Generating 3D Worlds

In a blog post, the San Francisco-based startup showcased the capabilities of its AI model. World Labs highlighted that most generative AI tools today create 2D content such as images or videos. While some AI firms do generate 3D models from 2D images or text prompts, the scope of those tools is fairly limited. Among recent efforts, Google DeepMind has unveiled an AI model that generates unique 2D video game levels.

However, based on the interactive assets shared by the startup, the unnamed AI system’s capabilities surpass the generative capabilities seen so far. In short, the company claims users can supply any image that depicts a scene as input, and the AI model can generate a 3D interactive version of that scene. This means users can move forward, backward, and side to side to explore the generated area.

The AI model not only generates three-dimensional renders of the objects in the image, but also creates unseen details from scratch, such as new alleyways, ceiling art, and new objects. World Labs claims that apart from the initial image, everything is generated by the AI system.

Additionally, the generated scenes can be modified. Users can change the camera angle, depth, and zoom, and add 3D effects to both the background and the objects in the foreground.

World Labs’ AI system can also be integrated with other AI tools. The startup said this will allow creators to first generate the starting image using a familiar text-to-image generator such as Ideogram, Dall-E, or Midjourney and then create a 3D world using the startup’s tool. The AI firm is currently working with a few creators to test the AI system’s capabilities and its 3D-native generative AI workflow.

As of now, the AI system is not publicly available, and the startup highlighted that it is still working to improve the size and fidelity of the generated worlds. However, interested individuals can join the company’s waitlist to be notified when the AI system is released.


OpenAI Sued by Canadian News Companies Over Alleged Copyright Breaches

Five Canadian news media companies filed a legal action on Friday against ChatGPT owner OpenAI, accusing the artificial-intelligence company of regularly breaching copyright and online terms of use.

The case is part of a wave of lawsuits against OpenAI and other tech companies by authors, visual artists, music publishers and other copyright owners over data used to train generative AI systems. Microsoft is OpenAI’s major backer.

In a statement, Torstar, Postmedia, The Globe and Mail, The Canadian Press, and CBC/Radio-Canada said OpenAI was scraping large swaths of content to develop its products without getting permission or compensating content owners.

“Journalism is in the public interest. OpenAI using other companies’ journalism for their own commercial gain is not. It’s illegal,” they said.

A New York federal judge dismissed a lawsuit on Nov. 7 against OpenAI that claimed it misused articles from news outlets Raw Story and AlterNet.

In an 84-page statement of claim filed in Ontario’s superior court of justice, the five Canadian companies demanded damages from OpenAI and a permanent injunction preventing it from using their material without consent.

“Rather than seek to obtain the information legally, OpenAI has elected to brazenly misappropriate the News Media Companies’ valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration,” they said in the filing.

“The News Media Companies have never received from OpenAI any form of consideration, including payment, in exchange for OpenAI’s use of their Works.”

In response, OpenAI said its models were trained on publicly available data, grounded in fair use and related international copyright principles that were fair for creators.

“We collaborate closely with news publishers, including in the display, attribution and links to their content in ChatGPT search, and offer them easy ways to opt out should they so desire,” a spokesperson said via email.

The Canadian news companies’ document did not mention Microsoft. This month, billionaire Elon Musk expanded a lawsuit against OpenAI to include Microsoft, alleging the two companies illegally sought to monopolize the market for generative AI and sideline competitors.

© Thomson Reuters 2024

Nvidia CEO Jensen Huang Says ‘The Age of AI Has Started’

Nvidia CEO Jensen Huang said on Saturday that global collaboration and cooperation in technology will continue, even if the incoming U.S. administration imposes stricter export controls on advanced computing products.

President-elect Donald Trump, in his first term in office, imposed a series of restrictions on the sale of U.S. technology to China citing national security concerns – a policy broadly continued under incumbent President Joe Biden.

“Open science in global collaboration, cooperation across math and science has been around for a very long time. It is the foundation of social advancement and scientific advancement,” Huang told media during a visit to Hong Kong.

Global cooperation is “going to continue. I don’t know what’s going to happen in the new administration, but whatever happens, we’ll balance simultaneously compliance with laws and policies, continue to advance our technology and support and serve customers all over the world.”

Earlier on Saturday, Huang told graduates and academics at the Hong Kong University of Science and Technology that “the age of AI has started”, in a speech delivered after receiving an honorary doctorate in engineering.

The head of the world’s leading maker of chips used for artificial intelligence applications received the award alongside actor Tony Leung, Nobel Prize for Chemistry winner Prof. Michael Levitt and Fields Medallist Prof. David Mumford.

“The age of AI has started. A new computing era that will impact every industry and every field of science,” said Huang.

He said Nvidia has “reinvented computing and sparked a new industrial revolution,” 25 years after inventing the graphics processing unit.

“AI is certainly the most important technology of our time, and potentially of all times.”

Huang, 61, also told graduates that he wished he had started his career at this time.

“The whole world is reset. You’re at the starting lines with everybody else. An industry is being reinvented. You now have the instruments, the instruments necessary to advance science in so many different fields,” Huang said.

“The greatest challenges of our time, unimaginable challenges to overcome in the past, all of a sudden seem possible to tackle.”

In the afternoon, Huang will participate in a fireside chat with the university’s Council Chairman Harry Shum, teachers and students.

© Thomson Reuters 2024

Bluesky Confirms It Will Not Train Its Generative AI Models on User Posts

Bluesky recently announced that it does not train its generative artificial intelligence (AI) models on user data. The social media platform also highlighted the areas where it uses AI tools and claimed that none of the models have been trained on the public and private posts made by users. The statement was released after several creators and users raised concerns about the platform’s privacy policy around AI. Notably, Bluesky recently crossed the 17 million registered users mark after one million users joined the platform in a single day last week.

Bluesky Says It Does Not Train AI on User Posts

In a post on the platform, Bluesky announced its stance on AI and user data. “We do not use any of your content to train generative AI, and have no intention of doing so,” the post said, adding that it was issued after several artists and creators on the platform raised concerns over the platform’s AI policy.

In a separate post, Bluesky also listed the areas where it uses AI tools. The company uses AI internally to assist its content moderation systems, which is a common practice for social media platforms. It also uses AI in its Discover algorithmic feed, through which the platform suggests posts to users based on their activity.

The Verge reported that while the company might not be using user data to train AI models, third-party firms can still crawl the platform and scrape the data to train their models. Company spokesperson Emily Liu told the publication that Bluesky’s robots.txt files do not stop outside companies from crawling its website for data.

However, the spokesperson highlighted that the issue is currently a topic of discussion within the team and Bluesky is trying to figure out how to ensure that outside organisations respect user consent on the platform.
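
For context, here is a small, self-contained Python check of the mechanism in question, using only the standard library. It asks Bluesky’s robots.txt whether a given crawler may fetch a page; “GPTBot” (OpenAI’s crawler) is used purely as an example user agent, and the profile URL is a placeholder.

from urllib.robotparser import RobotFileParser

# Fetch and parse the platform's robots.txt file.
parser = RobotFileParser("https://bsky.app/robots.txt")
parser.read()

# A compliant crawler checks before fetching; if nothing is disallowed
# for its user agent, scraping is permitted.
for agent in ("GPTBot", "*"):
    allowed = parser.can_fetch(agent, "https://bsky.app/profile/example.bsky.social")
    print(f"{agent}: fetching allowed = {allowed}")

Note that robots.txt is purely advisory: it only restrains crawlers that choose to honour it, which is why the company is looking beyond that file to make outside organisations respect user consent.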

Notably, on Sunday, Bluesky revealed that one million new users joined the social media platform in a single day. It also highlighted that the platform crossed the milestone of 17 million registered users.

TSMC to Suspend Production of Advanced AI Chips for China From November 11: Report

Taiwan Semiconductor Manufacturing Co (TSMC) has notified Chinese chip design companies that it is suspending production of their most advanced AI chips from Monday, the Financial Times reported, citing three people familiar with the matter.

TSMC, the world’s largest contract chipmaker, told Chinese customers it would no longer manufacture AI chips at advanced process nodes of 7 nanometres or smaller, FT said on Friday.

The U.S. has imposed a raft of measures aimed at restricting the shipment of advanced GPU chips – which enable AI – to China to hobble its artificial intelligence capabilities, which Washington fears could be used to develop bioweapons and launch large-scale cyberattacks.

Earlier this month, the U.S. imposed a $500,000 penalty on New York-based GlobalFoundries for shipping chips without authorization to an affiliate of blacklisted Chinese chipmaker SMIC.

Any future supplies of the advanced AI chips by TSMC to Chinese customers would be subject to an approval process likely to involve Washington, according to the FT report.

“TSMC does not comment on market rumour. TSMC is a law-abiding company and we are committed to complying with all applicable rules and regulations, including applicable export controls,” the company said.

The U.S. Department of Commerce did not immediately respond to a Reuters request for comment.

The move to restrict exports to China comes at a time when the U.S. Department of Commerce is investigating how a chip produced by the Taiwanese chipmaker ended up in a product made by China’s heavily sanctioned Huawei.

© Thomson Reuters 2024

Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely


Google introduced a new tool on Thursday to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company but also for other enterprises building large language models (LLMs). Now, the tech giant has introduced the SAIF tool, which can generate a checklist of actionable insights to improve the safety of an AI model. Notably, it is a questionnaire-based tool, and developers and enterprises will have to answer a series of questions before receiving the checklist.

In a blog post, the Mountain View-based tech giant highlighted that it has rolled out a new tool that will help others in the AI industry learn from Google’s best practices in deploying AI models. Large language models are capable of a wide range of harmful impacts, from generating inappropriate and indecent text, deepfakes, and misinformation to producing harmful information about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even if an AI model is secure enough, there is a risk that bad actors can jailbreak it to make it respond to commands it was not designed for. With such high stakes, developers and AI firms must take adequate precautions to ensure that their models are both safe for users and secure. The tool’s questions cover topics such as the training, tuning, and evaluation of models, access controls to models and data sets, prevention of attacks and harmful inputs, and generative AI-powered agents.

Google’s SAIF tool uses a questionnaire-based format, which can be accessed here. Developers and enterprises are required to answer questions such as, “Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data?”. After completing the questionnaire, users will get a customised checklist that they need to follow in order to fill the gaps in securing their AI model.

The tool can address risks such as data poisoning, prompt injection, and model source tampering. Each of these risks is identified in the questionnaire, and the tool offers a specific solution to the problem.

Alongside this, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly create AI security solutions in three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.
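
As a rough illustration of how such a questionnaire can be turned into a checklist, here is a toy Python sketch. It is not Google’s implementation; apart from the sample question quoted above, the questions, risk names, and recommendations are illustrative placeholders.

# Toy questionnaire-to-checklist mapping in the spirit of the SAIF tool.
# Each question is tied to a risk; answering "no" adds the matching
# remediation item to the generated checklist.
QUESTIONS = {
    "Are you able to detect, remove, and remediate malicious or accidental "
    "changes in your training, tuning, or evaluation data?": "data_poisoning",
    "Do you validate and constrain untrusted user input before it reaches "
    "the model?": "prompt_injection",
    "Do you verify the integrity and provenance of model artifacts before "
    "deployment?": "model_source_tampering",
}

RECOMMENDATIONS = {
    "data_poisoning": "Add integrity checks and provenance tracking for all training data.",
    "prompt_injection": "Filter, sanitise, and sandbox untrusted input before inference.",
    "model_source_tampering": "Sign model artifacts and verify signatures at load time.",
}

def build_checklist(answers: dict) -> list:
    """Return remediation items for every question answered 'no'."""
    return [
        RECOMMENDATIONS[risk]
        for question, risk in QUESTIONS.items()
        if not answers.get(question, False)
    ]

if __name__ == "__main__":
    answers = {question: False for question in QUESTIONS}  # assume every control is missing
    for item in build_checklist(answers):
        print("-", item)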

Microsoft, OpenAI Are Spending Millions on News Outlets to Let Them Try Out AI Tools


Microsoft and OpenAI, in collaboration with the Lenfest Institute for Journalism, announced an AI Collaborative and Fellowship programme on Tuesday. Through this programme, the two tech giants will spend upwards of $10 million (roughly Rs. 84.07 crore) in direct funding as well as enterprise credits to use proprietary software. The companies highlighted that the programme is aimed at increasing the adoption of artificial intelligence (AI) in newsrooms. As many as five news outlets have been announced as beneficiaries of the fellowship programme.

Microsoft, OpenAI to Fund News Outlets

In a blog post, OpenAI announced the fellowship programme. The AI firm highlighted that it is partnering with Microsoft and the Lenfest Institute for Journalism to “help newsrooms explore and implement ways in which artificial intelligence can help drive business sustainability and innovation in local journalism”. The funding initiative, titled the Lenfest Institute AI Collaborative and Fellowship programme, has finalised five news outlets that will receive funding in the initial round.

As per the post, the selected news outlets are Chicago Public Media, Newsday (Long Island, NY), The Minnesota Star Tribune, The Philadelphia Inquirer, and The Seattle Times. Each of them will receive $2.5 million (roughly Rs. 21 crore) in direct funding and another $2.5 million in software and enterprise credits.

This will be a two-year programme run with The Lenfest Institute’s Local Independent News Coalition (LINC) and a group of eight metropolitan news organisations in the US. During this period, the news organisations will collaborate with each other as well as with the larger industry ecosystem to “share learnings, product developments, case studies and technical information needed to help replicate their work in other newsrooms.” Additionally, three more news organisations will be awarded funding in a second round of grants.

The larger goal of the fellowship programme is to help news outlets develop the capacity to use AI to analyse public data, build news and visual archives, create new AI tools for newsrooms, and more. OpenAI said that the recipients were chosen after a comprehensive application process.

Gemini AI Assistant Could Soon Let Users Make Calls, Send Messages From Lockscreen


Gemini AI assistant, the recently added artificial intelligence (AI) virtual assistant for Android smartphones, is reportedly getting new capabilities. Ever since its release earlier this year, one of the major concerns has been its lack of integration with first-party and third-party apps. Over the months, the Mountain View-based tech giant has solved some of these issues with various extensions that provide access to different apps and functionalities. Now, a new report claims that Gemini on Android devices will be able to make calls and send messages from the lock screen.

Gemini on Lock Screen

According to an Android Authority report, the new Gemini AI assistant features were spotted in the Google app beta version 15.42.30.28.arm64. The features are not currently visible and were found during an Android application package (APK) teardown.

[Image: Calling and messaging feature on the lock screen via Gemini. Photo Credit: Android Authority]

The publication also shared a screenshot of the feature. Based on the screenshot, a new option has reportedly appeared in the “Gemini on the lock screen” menu in Gemini’s Settings. The option is titled “Make calls and send messages without unlocking” and is accompanied by a toggle switch. Users can reportedly turn it on if they wish to use this functionality.

Notably, users can already make calls and send messages from a locked device using Google Assistant. This new feature reportedly extends the capability to the AI-powered virtual assistant as well. As per the screenshot, users will still have to unlock the device to see incoming messages that contain personal content.

[Image: Redesigned Gemini AI assistant interface with the floating text field. Photo Credit: Android Authority]

Additionally, Google is reportedly also improving the floating Gemini text field overlay. Based on another screenshot, the new interface is a slimmer text box with two separate buttons for “Ask about this page” and “Summarise this page”. This new design reportedly replaces the large floating box that users currently get.

Further, the publication claimed that the extensions page of the Gemini AI assistant is also getting a minor makeover. Instead of showing all the extensions in the same space, the new design reportedly separates them into categories, said to include Communication, Device Control, Travel, Media, and Productivity. It is currently not known when these features might be rolled out to users.