Posted on

Bluesky Confirms It Will Not Train Its Generative AI Models on User Posts

Bluesky recently announced that it does not train its generative artificial intelligence (AI) models on user data. The social media platform also highlighted the areas where it uses AI tools and claimed that none of the models have been trained on the public and private posts made by users. The statement was released after several creators and users raised concerns about the platform’s privacy policy around AI. Notably, Bluesky recently crossed the 17 million registered users mark after one million users joined the platform in a single day last week.

Bluesky Says It Does Not Train AI on User Posts

In a post on the platform, Bluesky announced its stance on AI and user data. “We do not use any of your content to train generative AI, and have no intention of doing so,” the post said, adding that it was issued after several artists and creators on the platform raised concerns over the platform’s AI policy.

In a separate post, Bluesky also listed the areas where it uses AI tools. The company uses AI internally to assist its content moderation systems, which is a common practice for social media platforms. It also uses AI in its Discover algorithmic feed, through which the platform suggests posts to users based on their activity.

The Verge reported that while the company might not be using user data to train AI models, third-party firms can still crawl the platform and scrape the data to train their models. Company spokesperson Emily Liu told the publication that Bluesky’s robots.txt files do not stop outside companies from crawling its website for data.

However, the spokesperson highlighted that the issue is currently a topic of discussion within the team and Bluesky is trying to figure out how to ensure that outside organisations respect user consent on the platform.
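The robots.txt mechanism mentioned above is how well-behaved crawlers decide whether they may fetch a page, and it can be illustrated with Python's standard `urllib.robotparser`. The rules and the `ExampleAIBot` user-agent below are hypothetical illustrations, not Bluesky's actual files:

```python
import urllib.robotparser

# A hypothetical permissive robots.txt: nothing stops an AI crawler,
# which is the situation the article describes for Bluesky today.
permissive = urllib.robotparser.RobotFileParser()
permissive.parse("User-agent: *\nAllow: /".splitlines())

# A hypothetical restrictive version that disallows a named AI crawler.
restrictive = urllib.robotparser.RobotFileParser()
restrictive.parse(
    "User-agent: ExampleAIBot\nDisallow: /\n\nUser-agent: *\nAllow: /".splitlines()
)

url = "https://bsky.app/profile/example.bsky.social"
print(permissive.can_fetch("ExampleAIBot", url))   # True: the bot is not blocked
print(restrictive.can_fetch("ExampleAIBot", url))  # False: the bot is disallowed
```

Note that robots.txt is purely advisory; it cannot technically prevent scraping, which is why enforcing user consent remains an open question for the platform.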

Notably, on Sunday, Bluesky revealed that one million new users joined the social media platform in a single day. It also highlighted that the platform crossed the milestone of 17 million registered users.

Posted on

TSMC to Suspend Production of Advanced AI Chips for China From November 11: Report

Taiwan Semiconductor Manufacturing Co (TSMC) has notified Chinese chip design companies that it is suspending production of their most advanced AI chips from Monday, the Financial Times reported, citing three people familiar with the matter.

TSMC, the world’s largest contract chipmaker, told Chinese customers it would no longer manufacture AI chips at advanced process nodes of 7 nanometres or smaller, FT said on Friday.

The U.S. has imposed a raft of measures aimed at restricting the shipment of advanced GPU chips – which enable AI – to China to hobble its artificial intelligence capabilities, which Washington fears could be used to develop bioweapons and launch large-scale cyberattacks.

Earlier this month, the U.S. imposed a $500,000 penalty on New York-based GlobalFoundries for shipping chips without authorization to an affiliate of blacklisted Chinese chipmaker SMIC.

Any future supplies of the advanced AI chips by TSMC to Chinese customers would be subject to an approval process likely to involve Washington, according to the FT report.

“TSMC does not comment on market rumour. TSMC is a law-abiding company and we are committed to complying with all applicable rules and regulations, including applicable export controls,” the company said.

The U.S. Department of Commerce did not immediately respond to a Reuters request for comment.

The move to restrict exports to China comes at a time when the U.S. Department of Commerce is investigating how a chip produced by the Taiwanese chipmaker ended up in a product made by China’s heavily sanctioned Huawei.

© Thomson Reuters 2024

Posted on

Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely


Google introduced a new tool on Thursday to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company itself but also for other enterprises building large language models (LLMs). Now, it has introduced a SAIF tool that can generate a checklist of actionable insights to improve the safety of an AI model. The tool is questionnaire-based: developers and enterprises have to answer a series of questions before receiving the checklist.

In a blog post, the company highlighted that it has rolled out a new tool that will help others in the AI industry learn from Google's best practices in deploying AI models. Large language models are capable of a wide range of harmful impacts, from generating inappropriate and indecent text, deepfakes, and misinformation to surfacing dangerous information, including details about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even if an AI model is reasonably secure, there is a risk that bad actors can jailbreak it to make it respond to commands it was not designed for. Given such high stakes, developers and AI firms must take adequate precautions to ensure their models are both safe for users and secure against attacks. The tool's questions cover topics such as the training, tuning, and evaluation of models; access controls for models and data sets; the prevention of attacks and harmful inputs; generative AI-powered agents; and more.

Google's SAIF tool uses a questionnaire-based format, which can be accessed here. Developers and enterprises are required to answer questions such as, "Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data?". After completing the questionnaire, users get a customised checklist that they need to follow in order to close the gaps in securing their AI model.

The tool addresses risks such as data poisoning, prompt injection, and model source tampering. Each of these risks is identified in the questionnaire, and the tool offers a specific mitigation for each.

Alongside the tool, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly create AI security solutions in three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.
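The questionnaire-to-checklist flow described above can be sketched in a few lines of Python. The questions and remediation items below are paraphrased illustrations of the risks the article names (data poisoning, prompt injection, model source tampering), not Google's actual tool content:

```python
# Hypothetical mapping from SAIF-style questions to remediation items.
# These strings are illustrative; Google's real tool has its own content.
CHECKS = {
    "Can you detect and remediate malicious changes in training data?":
        "Add integrity checks and provenance tracking for data sets (data poisoning).",
    "Do you sanitise and constrain user prompts before they reach the model?":
        "Deploy input filtering and output validation (prompt injection).",
    "Are model artefacts stored with access controls and signatures?":
        "Sign and verify model files in the supply chain (model source tampering).",
}

def saif_checklist(answers):
    """Return the remediation item for every question answered 'no' (or unanswered)."""
    return [fix for question, fix in CHECKS.items()
            if answers.get(question, "no").lower() == "no"]

answers = {
    "Can you detect and remediate malicious changes in training data?": "yes",
    "Do you sanitise and constrain user prompts before they reach the model?": "no",
}
for item in saif_checklist(answers):
    print("-", item)
```

The real tool presumably branches on many more question types; the point here is only the shape of the interaction: answers in, gap-filling checklist out.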

Posted on

Microsoft, OpenAI Are Spending Millions on News Outlets to Let Them Try Out AI Tools


Microsoft and OpenAI, in collaboration with the Lenfest Institute for Journalism, announced an AI Collaborative and Fellowship programme on Tuesday. Through the programme, the two tech giants will spend upwards of $10 million (roughly Rs. 84.07 crore) in direct funding as well as enterprise credits for their proprietary software. The companies said the programme is aimed at increasing the adoption of artificial intelligence (AI) in newsrooms. Five news outlets have been announced as beneficiaries of the fellowship programme.

Microsoft, OpenAI to Fund News Outlets

In a blog post, OpenAI announced the fellowship programme. The AI firm highlighted that it is partnering with Microsoft and the Lenfest Institute for Journalism to "help newsrooms explore and implement ways in which artificial intelligence can help drive business sustainability and innovation in local journalism". The funding initiative, titled the Lenfest Institute AI Collaborative and Fellowship programme, has finalised five news outlets that will receive funding in the initial round.

As per the post, the selected news outlets are Chicago Public Media, Newsday (Long Island, NY), The Minnesota Star Tribune, The Philadelphia Inquirer, and The Seattle Times. Each of them will receive $2.5 million (roughly Rs. 21 crore) in direct funding and another $2.5 million in software and enterprise credits, with the programme totalling up to $10 million.

This will be a two-year programme involving The Lenfest Institute's Local Independent News Coalition (LINC) and a group of eight metropolitan news organisations in the US. During this period, the news organisations will collaborate with each other, as well as with the larger industry ecosystem, to "share learnings, product developments, case studies and technical information needed to help replicate their work in other newsrooms." Additionally, three more news organisations will be awarded funding in a second round of grants.

The larger goal of the fellowship programme is to help news outlets develop the capacity to use AI to analyse public data, build news and visual archives, create new AI tools for newsrooms, and more. OpenAI said that the recipients were chosen after a comprehensive application process.

Posted on

Gemini AI Assistant Could Soon Let Users Make Calls, Send Messages From Lockscreen


Gemini AI assistant, the recently added artificial intelligence (AI) virtual assistant for Android smartphones, is reportedly getting new capabilities. Ever since its release earlier this year, one of the major concerns has been its lack of integration with first-party and third-party apps. Over the months, the Mountain View-based tech giant has addressed some of these issues with various extensions that provide access to different apps and functionalities. Now, a new report claims that Gemini on Android devices will be able to make calls and send messages from the lock screen.

Gemini on Lock Screen

According to an Android Authority report, the new Gemini AI assistant features were spotted in Google app beta version 15.42.30.28.arm64. The features are not currently visible and were found during an Android application package (APK) teardown.

Calling and messaging feature on lock screen via Gemini. Photo Credit: Android Authority

The publication also shared a screenshot of the feature. Based on the screenshot, a new option has reportedly appeared in the Gemini on the lock screen menu in Gemini's Settings. The new option is titled "Make calls and send messages without unlocking" and is accompanied by a toggle switch. Users can reportedly turn it on if they wish to use this functionality.

Notably, users can currently make calls and send messages from a locked device using Google Assistant. This new feature reportedly extends the same capability to the AI-powered virtual assistant. As per the screenshot, users will still have to unlock the device to see incoming messages that contain personal content.

Redesigned Gemini AI assistant interface. Photo Credit: Android Authority

Additionally, Google is reportedly also improving the floating Gemini text field overlay. Based on another screenshot, the new interface is a slimmer text box with two separate buttons for the options "Ask about this page" and "Summarise this page". This new design reportedly replaces the large floating box users currently see.

Further, the publication claimed that the extensions page of the Gemini AI assistant is also getting a minor makeover. Instead of showing all the extensions in the same space, the new design reportedly separates them into categories such as Communication, Device Control, Travel, Media, and Productivity. It is currently not known when these features might be rolled out to users.

Posted on

Zoom AI Companion 2.0 With New Capabilities, Custom AI Avatars for Zoom Clips Introduced


Zoom announced new artificial intelligence (AI) features for its platform at its annual Zoomtopia event on Wednesday. The video conferencing platform introduced AI Companion 2.0, the second generation of its AI assistant, which can now handle more tasks. Available across Zoom Workplace, the company's AI-powered collaboration platform, it can now access Zoom Mail, Zoom Tasks, and more. Additionally, the company launched custom AI avatars, which let users record a video to generate an AI clone of themselves that can speak in a similar voice.

Zoom AI Companion 2.0 Introduced

In a newsroom post, the company introduced the new AI assistant for the Zoom Workplace platform. It is similar to the Gemini assistant on Android smartphones or Copilot on AI PCs. The AI Companion is the central hub for most of the AI features the company has added. Users can use it to summarise documents, generate text, and more.

With the second iteration of the AI assistant, Workplace users can access prompt suggestions in the side panel across all Zoom platforms. Users can also expand the assistant's context by connecting Gmail, Microsoft Outlook, and other similar apps. The assistant can also summarise unread messages within a Zoom Team Chat channel and recap email threads in Zoom Mail.

Within Zoom calls, the AI assistant can now answer questions outside the context of the meeting by searching the web. As a result, participants can ask the AI general knowledge and current affairs queries during a meeting. AI Companion 2.0 can also answer questions about the content discussed in a meeting even after the meeting has ended. Further, conversations with the AI now remain available to refer back to after the meeting ends.

Zoom Introduces Custom AI Avatars

Another interesting feature unveiled at the event is custom avatars for Zoom Clips. It lets users record a video clip, which is then processed by an AI model to generate an avatar. Only the bust of the avatar is visible, including the head, shoulders, and upper arms. The feature can also generate a voice similar to the user's. If a user adds a text script, the AI avatar will be able to speak and lip-sync to it.

While this feature was showcased at the Zoomtopia event, it will not be available to users until 2025.

Posted on

Adobe Content Authenticity Web App Introduced; Will Let Creators Add AI Label to Content

Adobe Content Authenticity, a free web app that allows users to easily add content credentials as well as artificial intelligence (AI) labels to their work, was introduced on Tuesday. The platform is aimed at helping creators with their attribution needs. It works on images, videos, and audio files and is integrated with all of the Adobe Creative Cloud apps. Alongside adding attribution, creators can also use the platform to opt their content out of AI model training. It is currently available as a Google Chrome extension in beta.

Adobe Content Authenticity Web App Introduced

In a newsroom post, Adobe detailed the new platform. Notably, while it is currently available as a Chrome extension, a free web app will be available in public beta in the first quarter of 2025. Users can sign up here to be notified when the beta is available to download. The company said the platform is aimed at "helping creators protect their work from misuse or misrepresentation and build a more trustworthy and transparent digital ecosystem for everyone."

The app will act as a one-stop shop for creators' attribution needs. They can use it to add Content Credentials, the information added to a file's metadata that details its creator. The app can apply these attributions to a batch of files. Creators can also choose the information they want to share, which can include their name, website, and social media accounts.

Adobe said that Content Credentials can protect creators from unauthorised use or misattribution of their work. Interestingly, while the web app supports all Adobe Creative Cloud apps, content not created on Adobe's platform can also be attributed. This applies to images, videos, and audio files.

Apart from attribution, the web app will also let users mark their content if they do not want it to be used by, or to train, AI models. The company highlighted that it only trains Adobe Firefly, its in-house family of generative AI models, on content that is either publicly available or that it has permission to use. However, adding the AI label is also intended to protect creators from other AI models in the market.

That will only work, though, if other companies decide to respect Content Credentials. Currently, only Spawning, the generative AI opt-out aggregator, has committed to recognising this attribution. Adobe said it is actively working to drive industry-wide adoption of the preference. Unfortunately, there is a downside: if a creator does not allow their work to be used for AI training, the content will not be eligible for Adobe Stock.
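The idea of binding attribution and an AI-training opt-out to a specific file can be illustrated with a toy JSON sidecar. This is a simplified stand-in for demonstration only: real Content Credentials are cryptographically signed manifests based on the C2PA standard, embedded in the file itself rather than a plain sidecar:

```python
import hashlib
import json
from pathlib import Path

def write_credentials(path, creator, allow_ai_training=False):
    """Write a toy attribution sidecar next to a media file.

    Illustrative only: Adobe's actual Content Credentials are signed
    C2PA manifests, not an unsigned JSON file like this one.
    """
    data = Path(path).read_bytes()
    sidecar = {
        "file": Path(path).name,
        # Hashing the bytes binds the record to this exact content.
        "sha256": hashlib.sha256(data).hexdigest(),
        "creator": creator,
        "ai_training_allowed": allow_ai_training,
    }
    out = Path(str(path) + ".credentials.json")
    out.write_text(json.dumps(sidecar, indent=2))
    return out

# Usage: create a dummy file and attach a credentials record to it.
Path("artwork.png").write_bytes(b"\x89PNG demo bytes")
sidecar = write_credentials("artwork.png", creator="Jane Doe")
print(json.loads(sidecar.read_text())["ai_training_allowed"])  # False
```

As the article notes, such a marker only helps if AI companies choose to read and honour it; nothing in the format itself enforces the opt-out.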

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.


Posted on

Apple Releases Depth Pro, an Open Source Monocular Depth Estimation AI Model


Apple has released several open-source artificial intelligence (AI) models this year, mostly small language models designed for specific tasks. Adding to the list, the Cupertino-based tech giant has now released a new AI model dubbed Depth Pro. It is a vision model that can generate monocular depth maps of any image. This technology is useful in the generation of 3D textures, augmented reality (AR), and more. The researchers behind the project claim that the depth maps generated by the AI are better than those generated with the help of multiple cameras.

Apple Releases Depth Pro AI Model

Depth estimation is an important process in 3D modelling as well as in various other technologies such as AR, autonomous driving systems, and robotics. The human eye is a complex lens system that can accurately gauge the depth of objects even when observing them from a single point of view. Cameras, however, are not as good at it: an image taken with a single camera appears two-dimensional, removing depth from the equation.

So, for technologies where the depth of an object plays an important role, multiple cameras are used. However, modelling objects this way can be time-consuming and resource-intensive. Instead, in a research paper titled "Depth Pro: Sharp Monocular Metric Depth in Less Than a Second", Apple highlighted how it used a vision-based AI model to generate zero-shot depth maps from monocular images of objects.

How the Depth Pro AI model generates depth maps. Photo Credit: Apple

To develop the AI model, the researchers used a Vision Transformer-based (ViT) architecture. The output resolution was fixed at 384 x 384, but the input and processing resolution was kept at 1536 x 1536, giving the AI model more room to capture detail.

In the pre-print version of the paper, currently published on the online repository arXiv, the researchers claimed that the AI model can accurately generate depth maps of visually complex objects such as a cage, or a furry cat's body and whiskers. The generation time is said to be one second. The weights of the open-source AI model are currently hosted on a GitHub listing. Interested individuals can run the model for inference on a single GPU.
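Monocular depth models in this family commonly predict inverse depth (larger values for closer objects), which is then converted to metric depth and normalised for display. The pure-Python sketch below shows that generic post-processing step; it is an illustration of the concept, not Apple's Depth Pro code, and the sample values are made up:

```python
def inverse_to_metric(inverse_depth, eps=1e-6):
    """Convert inverse-depth predictions (1/metres) to metric depth (metres)."""
    # eps guards against division by zero for points at infinity.
    return [[1.0 / max(v, eps) for v in row] for row in inverse_depth]

def normalise(depth):
    """Min-max normalise a depth map to [0, 1] for visualisation."""
    flat = [v for row in depth for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # avoid dividing by zero on a flat map
    return [[(v - lo) / scale for v in row] for row in depth]

# A toy 2x2 inverse-depth map: larger inverse depth means a closer object.
inv = [[0.5, 0.25],
       [0.1, 0.05]]
metric = inverse_to_metric(inv)  # reciprocal of each prediction, in metres
vis = normalise(metric)          # 0.0 for the nearest point, 1.0 for the farthest
print(metric)
print(vis)
```

A real pipeline would do this on GPU tensors over a 384 x 384 output map rather than Python lists, but the arithmetic is the same.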

Posted on

Microsoft Copilot Updated With AI-Powered Voice and Vision Features; Recall Availability Expanded


Microsoft is rolling out new artificial intelligence (AI) features to Copilot, the company's native chatbot. The tech giant is now adding both voice and vision capabilities to the chatbot, after announcing them on Tuesday. Microsoft claims the new Copilot features are aimed at offering an intuitive design along with "speedy and fluent answers". The Copilot Voice feature is similar to Gemini Live and ChatGPT's Voice Mode. Meanwhile, the much-criticised Recall feature will finally be expanded to all Windows Insider users this month.

Microsoft Copilot Updated With AI-Powered Features

In a blog post, Microsoft shared several details of the new AI features coming to Copilot. These features will be available in the Copilot app on iOS and Android, on the web client, as well as in the Copilot assistant on Windows. The latter will only be available on Copilot+ PCs, which are currently powered by Snapdragon X series chipsets.

Copilot Voice

With four voice options, users can now have a hands-free voice conversation with Microsoft's chatbot. The company said it could be used for brainstorming, asking a quick query, or just having a friendly conversation.

Notably, while the feature will offer a speech-to-speech experience, the company has not said whether the output is generated in real time or whether it supports an emotive voice.

Copilot Vision

Copilot Vision is also being added as a new way to interact with the AI. Once enabled, the feature will be able to see what the user sees on the screen. It also supports voice mode, letting users ask verbal queries about the on-screen content. For instance, users can show the AI a picture of furniture and ask about its colour palette, material, and more.

Since this feature could be perceived as invasive to user privacy, Microsoft has added several layers of security measures. The feature is opt-in and will not work until the user explicitly activates it.

Even after activation, the feature currently works only with a limited number of websites. Further, the tech giant said that the data processed by the chatbot will not be collected or used to train the AI.

Windows Recall

Microsoft's Recall feature, which takes passive screenshots of a user's laptop or desktop and keeps track of the user's activity locally, is now rolling out to a wider user base. Microsoft said in a blog post that the feature will be rolled out to Windows Insiders using Copilot+ PCs this month.

For now, it will only be available on Snapdragon-powered PCs. In November, the tech giant will roll it out to AMD-powered PCs as well.

Posted on

Gemini Live Two-Way Communication Feature Now Available for All Android Users: How to Use


Gemini Live, Google's two-way voice chat feature for its artificial intelligence (AI) chatbot, is now available to all Android users. The feature was initially released to Gemini Advanced users via the Google One AI Premium plan, but the company is now rolling it out to everyone. However, only the basic version of the feature is available: the choice between ten different voices is not offered in the free tier. A report earlier this month revealed that Google was rolling the feature out to all Android users.

Gemini Live Feature Now Available to All Android Users

Since the Gemini app is still not available on iOS, the Gemini Live feature is not available to iPhone users. However, Android users with a compatible device and the Gemini app will now see a waveform icon with a sparkle at the bottom-right corner, next to the microphone and camera icons.

Tapping the waveform icon gives users access to Gemini Live. Put simply, it is a two-way voice chat feature where both the user and the AI respond via speech. While the AI speaks fluently and shows slight voice modulation, it is not comparable to ChatGPT's Advanced Voice Mode, which comes with an emotive voice and the ability to react to the user's words.

However, the feature is still useful when the user is on the go and would prefer a verbal conversation, whether to get the summary of an email or to learn about an intriguing topic. The full-screen interface of Gemini Live resembles a phone call: users see a sound-wave-like pattern at the centre of the screen, with Hold and End buttons placed at the bottom. If you're interested in using the feature, this is how you can do it.

How to Use Gemini Live Feature

  1. On an Android device, download and install the Gemini app.
  2. Open the Gemini app.
  3. Find the waveform icon at the bottom-right of the screen.
  4. Tap on it.
  5. First-time users will see a terms and conditions menu. Accept it.
  6. You can now see the Gemini Live interface.
  7. Start speaking to trigger a response from the AI.
  8. Using the Hold button, you can also interrupt the AI and continue with another prompt.

