
OpenAI Sued by Canadian News Companies Over Alleged Copyright Breaches

Five Canadian news media companies filed a legal action on Friday against ChatGPT owner OpenAI, accusing the artificial-intelligence company of regularly breaching copyright and online terms of use.

The case is part of a wave of lawsuits against OpenAI and other tech companies by authors, visual artists, music publishers and other copyright owners over data used to train generative AI systems. Microsoft is OpenAI’s major backer.

In a statement, Torstar, Postmedia, The Globe and Mail, The Canadian Press, and CBC/Radio-Canada said OpenAI was scraping large swaths of content to develop its products without getting permission or compensating content owners.

“Journalism is in the public interest. OpenAI using other companies’ journalism for their own commercial gain is not. It’s illegal,” they said.

On Nov. 7, a New York federal judge dismissed a lawsuit against OpenAI that claimed it had misused articles from the news outlets Raw Story and AlterNet.

In an 84-page statement of claim filed in Ontario’s superior court of justice, the five Canadian companies demanded damages from OpenAI and a permanent injunction preventing it from using their material without consent.

“Rather than seek to obtain the information legally, OpenAI has elected to brazenly misappropriate the News Media Companies’ valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration,” they said in the filing.

“The News Media Companies have never received from OpenAI any form of consideration, including payment, in exchange for OpenAI’s use of their Works.”

In response, OpenAI said its models were trained on publicly available data, grounded in fair use and related international copyright principles that were fair for creators.

“We collaborate closely with news publishers, including in the display, attribution and links to their content in ChatGPT search, and offer them easy ways to opt out should they so desire,” a spokesperson said via email.

The Canadian news companies’ document did not mention Microsoft. This month, billionaire Elon Musk expanded a lawsuit against OpenAI to include Microsoft, alleging the two companies illegally sought to monopolize the market for generative AI and sideline competitors.

© Thomson Reuters 2024


Nvidia CEO Jensen Huang Says ‘The Age of AI Has Started’

Nvidia CEO Jensen Huang said on Saturday that global collaboration and cooperation in technology will continue, even if the incoming U.S. administration imposes stricter export controls on advanced computing products.

President-elect Donald Trump, in his first term in office, imposed a series of restrictions on the sale of U.S. technology to China citing national security concerns – a policy broadly continued under incumbent President Joe Biden.

“Open science in global collaboration, cooperation across math and science has been around for a very long time. It is the foundation of social advancement and scientific advancement,” Huang told media during a visit to Hong Kong.

Global cooperation is “going to continue. I don’t know what’s going to happen in the new administration, but whatever happens, we’ll balance simultaneously compliance with laws and policies, continue to advance our technology and support and serve customers all over the world.”

Earlier on Saturday Huang told graduates and academics at the Hong Kong University of Science and Technology that “the age of AI has started” in a speech after receiving an honorary doctorate degree in engineering.

The head of the world’s leading maker of chips used for artificial intelligence applications received the award alongside actor Tony Leung, Nobel Prize for Chemistry winner Prof. Michael Levitt and Fields Medallist Prof. David Mumford.

“The age of AI has started. A new computing era that will impact every industry and every field of science,” said Huang.

He said Nvidia has “reinvented computing and sparked a new industrial revolution,” 25 years after inventing the graphics processing unit.

“AI is certainly the most important technology of our time, and potentially of all times.”

Huang, 61, also told graduates that he wished he had started his career at this time.

“The whole world is reset. You’re at the starting lines with everybody else. An industry is being reinvented. You now have the instruments, the instruments necessary to advance science in so many different fields,” Huang said.

“The greatest challenges of our time, unimaginable challenges to overcome in the past, all of a sudden seem possible to tackle.”

In the afternoon, Huang will participate in a fireside chat with the university’s Council Chairman Harry Sham, teachers and students.

© Thomson Reuters 2024


Bluesky Confirms It Will Not Train Its Generative AI Models on User Posts

Bluesky recently announced that it does not train its generative artificial intelligence (AI) models on user data. The social media platform also highlighted the areas where it uses AI tools and claimed that none of the models have been trained on the public and private posts made by users. The statement was released after several creators and users raised concerns about the platform’s privacy policy around AI. Notably, Bluesky recently crossed the 17 million registered users mark after one million users joined the platform in a single day last week.

Bluesky Says It Does Not Train AI on User Posts

In a post on the platform, Bluesky announced its stance on AI and user data. “We do not use any of your content to train generative AI, and have no intention of doing so,” the post said, adding that it was issued after several artists and creators on the platform raised concerns over the platform’s AI policy.

In a separate post, Bluesky also listed the areas where it uses generative AI tools. The company uses AI internally to assist its content moderation systems, which is a common practice for social media platforms. Additionally, it uses AI in its Discover algorithmic feed, through which the platform suggests posts to users based on their activity.

The Verge reported that while the company might not be using user data to train AI models, third-party firms can still crawl the platform and scrape the data to train their models. Company spokesperson Emily Liu told the publication that Bluesky’s robots.txt files do not stop outside companies from crawling its website for data.

However, the spokesperson highlighted that the issue is currently a topic of discussion within the team and Bluesky is trying to figure out how to ensure that outside organisations respect user consent on the platform.
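For context, robots.txt is a voluntary convention: it asks crawlers to stay away but does not technically block them, which is why Bluesky's permissive files leave the data open to scraping. A site that wanted to discourage known AI-training crawlers could publish directives like the following sketch (GPTBot, CCBot, and Google-Extended are real user-agent tokens published by their operators; compliance remains up to each crawler):

```text
# robots.txt — asks known AI-training crawlers not to collect the site's content.
# This is advisory only; it does not technically prevent scraping.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers may proceed normally.
User-agent: *
Allow: /
```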

Notably, on Sunday, Bluesky revealed that one million new users joined the social media platform in a single day. It also highlighted that the platform crossed the milestone of 17 million registered users.


TSMC to Suspend Production of Advanced AI Chips for China From November 11: Report

Taiwan Semiconductor Manufacturing Co (TSMC) has notified Chinese chip design companies that it is suspending production of their most advanced AI chips from Monday, the Financial Times reported, citing three people familiar with the matter.

TSMC, the world’s largest contract chipmaker, told Chinese customers it would no longer manufacture AI chips at advanced process nodes of 7 nanometres or smaller, FT said on Friday.

The U.S. has imposed a raft of measures aimed at restricting the shipment of advanced GPU chips – which enable AI – to China to hobble its artificial intelligence capabilities, which Washington fears could be used to develop bioweapons and launch large-scale cyberattacks.

Earlier this month, the U.S. imposed a $500,000 penalty on New York-based GlobalFoundries for shipping chips without authorization to an affiliate of blacklisted Chinese chipmaker SMIC.

Any future supplies of the advanced AI chips by TSMC to Chinese customers would be subject to an approval process likely to involve Washington, according to the FT report.

“TSMC does not comment on market rumour. TSMC is a law-abiding company and we are committed to complying with all applicable rules and regulations, including applicable export controls,” the company said.

The U.S. Department of Commerce did not immediately respond to a Reuters request for comment.

The move to restrict exports to China comes at a time when the U.S. Department of Commerce is investigating how a chip produced by the Taiwanese chipmaker ended up in a product made by China’s heavily sanctioned Huawei.

© Thomson Reuters 2024


Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely


Google introduced a new tool to share its best practices for deploying artificial intelligence (AI) models on Thursday. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline for not only the company but also other enterprises building large language models (LLMs). Now, it has introduced the SAIF tool, which can generate a checklist with actionable insights to improve the safety of an AI model. Notably, it is a questionnaire-based tool: developers and enterprises answer a series of questions before receiving the checklist.

In a blog post, Google highlighted that it has rolled out a new tool that will help others in the AI industry learn from its best practices in deploying AI models. Large language models are capable of a wide range of harmful impacts, from generating inappropriate and indecent text, deepfakes, and misinformation to producing dangerous information, including details about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even if an AI model is otherwise secure, there is a risk that bad actors can jailbreak it to make it respond to commands it was not designed to handle. Given such high risks, developers and AI firms must take adequate precautions to ensure their models are both safe for users and secure. The tool's questions cover topics such as the training, tuning and evaluation of models, access controls to models and data sets, prevention of attacks and harmful inputs, and generative AI-powered agents.

Google’s SAIF tool offers a questionnaire-based format, which can be accessed here. Developers and enterprises are required to answer questions such as, “Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data?” After completing the questionnaire, users will get a customised checklist that they need to follow in order to fill the gaps in securing their AI model.

The tool is capable of handling risks such as data poisoning, prompt injection, model source tampering, and others. Each of these risks is identified in the questionnaire, and the tool offers a specific solution to the problem.
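The flow described above, answers flagging risks and each flagged risk mapping to a remediation item, can be pictured with a small sketch. This is a hypothetical illustration only, not Google's actual SAIF tool; the risk keys and recommendation strings here are invented for the example:

```python
# Hypothetical sketch of a questionnaire-to-checklist mapping.
# The risk names and recommendation texts are illustrative, not Google's.
RECOMMENDATIONS = {
    "data_poisoning": "Verify integrity of training, tuning and evaluation data.",
    "prompt_injection": "Sanitise and constrain user-supplied prompts.",
    "model_source_tampering": "Sign and verify model artefacts in the supply chain.",
}

def build_checklist(answers):
    """Return a recommendation for every risk the answers flag as unmitigated."""
    return [
        RECOMMENDATIONS[risk]
        for risk, mitigated in answers.items()
        if not mitigated and risk in RECOMMENDATIONS
    ]

# Two risks unmitigated -> two checklist items; the mitigated one is omitted.
checklist = build_checklist({
    "data_poisoning": False,
    "prompt_injection": True,
    "model_source_tampering": False,
})
print(checklist)
```

The real tool presumably carries far richer logic, but the principle is the same: the customised checklist contains one remediation entry per gap the questionnaire uncovers.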

Alongside the tool, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly create AI security solutions in three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.


Microsoft, OpenAI Are Spending Millions on News Outlets to Let Them Try Out AI Tools


Microsoft and OpenAI, in collaboration with the Lenfest Institute for Journalism, announced an AI Collaborative and Fellowship programme on Tuesday. With this programme, the two tech giants will spend upwards of $10 million (roughly Rs. 84.07 crores) in direct funding as well as enterprise credits to use proprietary software. The companies highlighted that the programme is aimed at increasing the adoption of artificial intelligence (AI) in newsrooms. As many as five news outlets have been announced as the beneficiaries of this fellowship programme.

Microsoft, OpenAI to Fund News Outlets

In a blog post, OpenAI announced the fellowship programme. The AI firm highlighted that it is partnering with Microsoft and the Lenfest Institute for Journalism to “help newsrooms explore and implement ways in which artificial intelligence can help drive business sustainability and innovation in local journalism”. The funding initiative, titled the Lenfest Institute AI Collaborative and Fellowship programme, has finalised five news outlets which will receive funding in the initial round.

As per the post, the selected news outlets include Chicago Public Media, Newsday (Long Island, NY), The Minnesota Star Tribune, The Philadelphia Inquirer, and The Seattle Times. Each of them will receive $2.5 million (roughly Rs. 21 crores) in direct funding and another $2.5 million in software and enterprise credits, for a total of up to $10 million.

This will be a two-year programme with The Lenfest Institute’s Local Independent News Coalition (LINC) and a group of eight metropolitan news organisations in the US. During this period, the news organisations will collaborate with each other as well as the larger industry ecosystem to “share learnings, product developments, case studies and technical information needed to help replicate their work in other newsrooms.” Additionally, three more news organisations will be awarded funding in the second round of grants.

The larger goal of the fellowship programme is to help news outlets develop the capacity to use AI for the analysis of public data, build news and visual archives, create new AI tools for newsrooms, and more. OpenAI said that the recipients were chosen after a comprehensive application process.


Gemini AI Assistant Could Soon Let Users Make Calls, Send Messages From Lockscreen


Gemini AI assistant, the recently added artificial intelligence (AI) virtual assistant for Android smartphones, is reportedly getting new capabilities. Ever since its release earlier this year, one of the major concerns has been a lack of integration with first-party and third-party apps. Over the months, the Mountain View-based tech giant has solved some of these issues with various extensions that provide access to different apps and functionalities. Now, a new report claims that Gemini on Android devices will be able to make calls and send messages from the lock screen.

Gemini on Lock Screen

According to an Android Authority report, the new Gemini AI assistant features were spotted in the Google app beta version 15.42.30.28.arm64. The features are not currently visible and were found during the Android application package (APK) teardown process.


Calling and messaging feature on lock screen via Gemini
Photo Credit: Android Authority

The publication also shared a screenshot of the feature. Based on the screenshot, a new option has reportedly appeared in the “Gemini on the lock screen” menu in Gemini’s Settings. This new option is titled “Make calls and send messages without unlocking” and is accompanied by a toggle switch. Users can reportedly turn it on if they wish to use this functionality.

Notably, users can currently make calls and send messages using Google Assistant even when their devices are locked. However, this new feature reportedly extends the capability to the AI-powered virtual assistant as well. As per the screenshot, users will still have to unlock the device to see incoming messages that contain personal content.


Redesigned Gemini AI assistant interface
Photo Credit: Android Authority

Additionally, Google is reportedly also improving the floating Gemini text field overlay. Based on another screenshot shared, the new interface is a slimmer text box with two separate boxes that contain the options “Ask about this page” and “Summarise this page”. This new design reportedly replaces the large floating box which users currently get.

Further, the publication claimed that the extensions page of the Gemini AI assistant is also getting a minor makeover. Instead of showing all the extensions in the same space, the new design reportedly separates them into different categories, said to include Communication, Device Control, Travel, Media, and Productivity. It is currently not known when these features might be rolled out to users.


Zoom AI Companion 2.0 With New Capabilities, Custom AI Avatars for Zoom Clips Introduced


Zoom announced new artificial intelligence (AI) features for its platform at its annual Zoomtopia event on Wednesday. The video conferencing platform introduced AI Companion 2.0, the second generation of its AI assistant, which can now handle more tasks. Available across Zoom Workplace, the company’s AI-powered collaboration platform, it can now access Zoom Mail, Zoom Tasks, and more. Additionally, the company also launched custom AI avatars, which will let users record a video to generate an AI clone of themselves that can speak in a similar voice.

Zoom AI Companion 2.0 Introduced

In a newsroom post, the company introduced the new AI assistant for the Zoom Workplace platform. It is similar to the Gemini assistant on Android smartphones or Copilot on AI PCs. The AI Companion is the central hub for most of the AI features added by the company. Users can use it to summarise documents, generate text, and more.

With the second iteration of the AI assistant, Workplace users can access prompt suggestions in the side panel across all Zoom platforms. Users can also expand the context of information by connecting Gmail, Microsoft Outlook, and other similar apps. The assistant can also summarise unread messages within a Zoom Team Chat channel and recap email threads in Zoom Mail.

Within Zoom calls, the AI assistant can now answer questions outside the context of the meeting by searching the web. Hence, participants can ask the AI general knowledge and current affairs queries while in a meeting. AI Companion 2.0 can also answer questions about the content discussed in a meeting even after the meeting has ended. Further, conversations with the AI are now available to refer back to after the meeting ends.

Zoom Introduces Custom AI Avatar

Another interesting feature unveiled at the event is custom avatars for Zoom Clips. It lets users record a video clip, which is then processed by an AI model to generate an avatar. Only the bust of the avatar is visible, including the head, shoulders, and upper arms. The feature can also generate a voice similar to the user’s. If a user adds a text script, the AI avatar will be able to speak it with matching lip sync.

While this feature was showcased at the Zoomtopia event, it will not be available to users until 2025.


Adobe Content Authenticity Web App Introduced; Will Let Creators Add AI Label to Content

Adobe Content Authenticity, a free web app that allows users to easily add content credentials as well as artificial intelligence (AI) labels, was introduced on Tuesday. The platform is aimed at helping creators with their attribution needs. It works on images, videos, and audio files and is integrated with all of the Adobe Creative Cloud apps. Alongside adding attribution, creators can also use the platform to opt out of having their content used to train AI models. It is currently available as a Google Chrome extension in beta.

Adobe Content Authenticity Web App Introduced

In a newsroom post, Adobe detailed the new platform. Notably, while it is currently available as a Chrome extension, a free web app will be available in public beta in the first quarter of 2025. Users can sign up here to be notified when the beta is available to download. The company highlighted that the platform is aimed at “helping creators protect their work from misuse or misrepresentation and build a more trustworthy and transparent digital ecosystem for everyone.”

The app will act as a one-stop shop for all the attribution needs of creators. They can use it to add Content Credentials, which is information added to a file’s metadata highlighting details about its creator. The app can be used to add these attributions to a batch of files. Creators can also choose the information they want to share, which can include their name, website, and social media accounts.

Adobe said that Content Credentials can protect creators from unauthorised use or misattribution of their work. Interestingly, while the web app supports all the Adobe Creative Cloud apps, content not created on its platform can also be attributed. This goes for images, videos, and audio files.

Apart from attribution, the web app will also let users mark if they do not want their content to be used by or to train AI models. The company highlighted that it only trains Adobe Firefly, its in-house family of generative AI models, on content which is either publicly available or which it has permission to use. However, adding the AI label will also protect the creator from other AI models in the market.

However, that will only work if other companies decide to respect Content Credentials. Currently, only Spawning, the generative AI opt-out aggregator, has committed to recognising this attribution. Adobe said it is actively working to drive industry-wide adoption of this preference. Unfortunately, there is a downside: if a creator does not allow their work to be used for AI training, the content will not be eligible for Adobe Stock.


Apple Releases Depth Pro, an Open Source Monocular Depth Estimation AI Model


Apple has released several open-source artificial intelligence (AI) models this year. These are mostly small language models designed for specific tasks. Adding to the list, the Cupertino-based tech giant has now released a new AI model dubbed Depth Pro. It is a vision model that can generate monocular depth maps of any image. This technology is useful in the generation of 3D textures, augmented reality (AR), and more. The researchers behind the project claim that the depth maps generated by AI are better than those generated with the help of multiple cameras.

Apple Releases Depth Pro AI Model

Depth estimation is an important process in 3D modelling as well as in various other technologies such as AR, autonomous driving systems, and robotics. The human eye is a complex lens system that can accurately gauge the depth of objects even while observing them from a single-point perspective. Cameras, however, are not as good at this: images taken with a single camera appear two-dimensional, removing depth from the equation.

So, for technologies where the depth of an object plays an important role, multiple cameras are used. However, modelling objects this way can be time-consuming and resource-intensive. Instead, in a research paper titled “Depth Pro: Sharp Monocular Metric Depth in Less Than a Second”, Apple highlighted how it used a vision-based AI model to generate zero-shot depth maps from monocular images of objects.
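For context, the multi-camera setups that Depth Pro aims to replace recover depth through standard stereo triangulation: depth equals focal length times the camera baseline divided by the pixel disparity between the two views. A minimal sketch of that well-known relation (the sample numbers below are purely illustrative):

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point that shifts 20 px between two cameras 10 cm apart (focal length
# 1000 px) lies 5 m away; halving the disparity doubles the estimated depth.
print(stereo_depth_m(1000.0, 0.10, 20.0))  # 5.0
print(stereo_depth_m(1000.0, 0.10, 10.0))  # 10.0
```

Monocular models such as Depth Pro dispense with the second camera entirely, estimating metric depth from a single image instead of from disparity.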


How the Depth Pro AI model generates depth maps
Photo Credit: Apple

To develop the AI model, the researchers used a Vision Transformer-based (ViT) architecture. An output resolution of 384 x 384 was picked, but the input and processing resolution was kept at 1536 x 1536, giving the AI model more room to capture fine details.

In the pre-print version of the paper, which is currently available on the arXiv server, the researchers claimed that the AI model can accurately generate depth maps of visually complex subjects such as a cage, a furry cat’s body and whiskers, and more. The generation time is said to be one second. The weights of the open-source AI model are currently hosted on a GitHub listing. Interested individuals can run inference with the model on a single GPU.