Zoom AI Companion 2.0 With New Capabilities, Custom AI Avatars for Zoom Clips Introduced


Zoom announced new artificial intelligence (AI) features for its platform at its annual Zoomtopia event on Wednesday. The video conferencing platform introduced AI Companion 2.0, the second generation of its AI assistant, which can now handle more tasks. Available across Zoom Workplace, the company’s AI-powered collaboration platform, it can now access Zoom Mail, Zoom Tasks, and more. Additionally, the company launched custom AI avatars, which let users record a video to generate an AI clone of themselves that can speak in a similar voice.

Zoom AI Companion 2.0 Introduced

In a newsroom post, the company introduced the new AI assistant for the Zoom Workplace platform. It is similar to the Gemini assistant on Android smartphones or Copilot on AI PCs. The AI Companion is the central hub for most of the AI features added by the company. Users can use it to summarise documents, generate text, and more.

With the second iteration of the AI assistant, Workplace users can access prompt suggestions in the side panel across all Zoom platforms. Users can also expand the context of information by connecting Gmail, Microsoft Outlook, and other similar apps. The assistant can also summarise unread messages within a Zoom Team Chat channel and recap email threads in Zoom Mail.

Within Zoom calls, the AI assistant can now answer questions outside the context of the meeting by searching the web. Hence, participants can ask the AI general knowledge and current affairs queries while in the meeting. AI Companion 2.0 can also answer questions about the content discussed in a meeting even after the meeting has ended. Further, conversations with the AI remain available to refer back to after the meeting ends.

Zoom Introduces Custom AI Avatars

Another interesting feature unveiled at the event is custom avatars for Zoom Clips. It lets users record a video clip, which is then processed by an AI model to generate an avatar. Only the bust of the avatar, including the head, shoulders, and upper arms, is visible. The feature can also generate a voice similar to the user’s. If a user adds a text script, the AI avatar will be able to speak and lip sync to it.

While this feature was showcased at the Zoomtopia event, it will not be available to users until 2025.

Adobe Content Authenticity Web App Introduced; Will Let Creators Add AI Label to Content

Adobe Content Authenticity, a free web app that allows users to easily add content credentials as well as artificial intelligence (AI) labels, was introduced on Tuesday. The platform is aimed at helping creators with their attribution needs. It works on images, videos, and audio files and is integrated with all of the Adobe Creative Cloud apps. Alongside adding attribution, creators can also use the platform to opt out of having their content used to train AI models. It is currently available as a Google Chrome extension in beta.

Adobe Content Authenticity Web App Introduced

In a newsroom post, Adobe detailed the new platform. Notably, while it is currently available as a Chrome extension, a free web app will be available in public beta in the first quarter of 2025. Users can sign up here to be notified when the beta is available to download. The company highlighted that the platform is aimed at “helping creators protect their work from misuse or misrepresentation and build a more trustworthy and transparent digital ecosystem for everyone.”

The app will act as a one-stop shop for all the attribution needs of creators. They can use it to add Content Credentials, the information added to a file’s metadata highlighting details about its creator. The app can be used to add these attributions to a batch of files. Creators can also choose the information they want to share, which can include their name, website, and social media accounts.

Adobe said that Content Credentials can protect creators from unauthorised use or misattribution of their work. Interestingly, while the web app supports all the Adobe Creative Cloud apps, content not created in Adobe’s apps can also be attributed. This goes for images, videos, and audio files.

Apart from attribution, the web app will also let users mark if they do not want their content to be used by or to train AI models. The company highlighted that it only trains Adobe Firefly, its in-house family of generative AI models, on content that is either publicly available or that it has permission to use. However, adding the AI label will also protect the creator from other AI models in the market.
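To make this concrete, below is a hypothetical sketch of the kind of metadata a Content Credentials manifest can carry: creator attribution plus a generative-AI training opt-out. Content Credentials are built on the open C2PA standard, and the field names here loosely follow the C2PA specification's creative-work and training-and-mining assertions, but they should be read as illustrative rather than Adobe's exact schema.

```python
import json

# Hypothetical C2PA-style manifest: creator attribution plus an
# AI-training opt-out. Labels and field names are illustrative only.
manifest = {
    "claim_generator": "example-app/1.0",  # hypothetical signing tool
    "title": "sunset.jpg",
    "assertions": [
        {
            # Creator attribution: name, website, social accounts.
            "label": "stds.schema-org.CreativeWork",
            "data": {
                "author": [{"@type": "Person", "name": "Jane Creator"}],
                "url": "https://example.com",  # hypothetical creator site
            },
        },
        {
            # Preference that generative AI models not train on this file.
            "label": "c2pa.training-mining",
            "data": {
                "entries": {
                    "c2pa.ai_generative_training": {"use": "notAllowed"},
                },
            },
        },
    ],
}

print(json.dumps(manifest, indent=2))
```

A manifest like this is cryptographically signed and embedded in, or linked from, the file's metadata, which is what lets third-party tools read and honour the stated preference.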

However, that will only work if other companies decide to respect Content Credentials. Currently, only Spawning, a generative AI opt-out aggregator, has committed to recognising this attribution. Adobe said it is actively working to drive industry-wide adoption of this preference. Unfortunately, there is a downside: if a creator does not allow their work to be used for AI training, the content will not be eligible for Adobe Stock.

Apple Releases Depth Pro, an Open Source Monocular Depth Estimation AI Model


Apple has released several open-source artificial intelligence (AI) models this year, mostly small language models designed for specific tasks. Adding to the list, the Cupertino-based tech giant has now released a new AI model dubbed Depth Pro. It is a vision model that can generate monocular depth maps of any image. This technology is useful in the generation of 3D textures, augmented reality (AR), and more. The researchers behind the project claim that the depth maps generated by the AI are better than the ones generated with the help of multiple cameras.

Apple Releases Depth Pro AI Model

Depth estimation is an important process in 3D modelling as well as in various other technologies such as AR, autonomous driving systems, and robotics. The human eye is a complex lens system that can accurately gauge the depth of objects even while observing them from a single point of view. Cameras, however, are not as good at it: images taken with a single camera appear two-dimensional, removing depth from the equation.

So, for technologies where the depth of an object plays an important role, multiple cameras are used. However, modelling objects like this can be time-consuming and resource-intensive. Instead, in a research paper titled “Depth Pro: Sharp Monocular Metric Depth in Less Than a Second”, Apple highlighted how it used a vision-based AI model to generate zero-shot depth maps from monocular images of objects.


[Image] How the Depth Pro AI model generates depth maps
Photo Credit: Apple

To develop the AI model, the researchers used a Vision Transformer-based (ViT) architecture. An output resolution of 384 x 384 pixels was picked, but the input and processing resolution was kept at 1536 x 1536, giving the AI model more room to understand the details.
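As a rough illustration of why the higher processing resolution matters, the sketch below counts how many ViT patches (tokens) each resolution yields. It assumes the standard 16 x 16 ViT patch size; the paper's exact tiling scheme may differ.

```python
# Token count for a square image split into non-overlapping ViT patches.
# The 16 x 16 patch size is the common ViT default, assumed here for
# illustration; Depth Pro's actual tiling may differ.
PATCH_SIZE = 16

def num_tokens(image_side: int, patch_side: int = PATCH_SIZE) -> int:
    """Number of patch tokens for a square image of the given side length."""
    return (image_side // patch_side) ** 2

print(num_tokens(384))   # 576 tokens at the 384 x 384 output resolution
print(num_tokens(1536))  # 9216 tokens at 1536 x 1536, 16x more spatial detail
```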

In the pre-print version of the paper, currently posted on the online repository arXiv, the researchers claimed that the AI model can accurately generate depth maps of visually complex subjects such as a cage, a furry cat’s body and whiskers, and more. The generation time is said to be one second. The weights of the open-source AI model are currently hosted on a GitHub listing. Interested individuals can run inference with the model on a single GPU.
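For reference, running the model looks roughly like the sketch below, which follows the usage shown in the GitHub repository's documentation; treat the exact function names as assumptions in case the API has since changed.

```python
import depth_pro  # package from Apple's ml-depth-pro GitHub listing

# Load the pretrained model and its matching preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; f_px is the focal length in pixels, if available
# from the image's metadata.
image, _, f_px = depth_pro.load_rgb("example.jpg")
image = transform(image)

# Run inference to get a metric depth map for the whole image.
prediction = model.infer(image, f_px=f_px)
depth_map = prediction["depth"]               # per-pixel depth in metres
focal_length = prediction["focallength_px"]   # estimated focal length
```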

Microsoft Copilot Updated With AI-Powered Voice and Vision Features; Recall Availability Expanded


Microsoft is rolling out new artificial intelligence (AI) features to Copilot, the company’s native chatbot. The tech giant is now adding both voice and vision capabilities to the chatbot, after announcing them on Tuesday. Microsoft claims that the new Copilot features are aimed at offering an intuitive design along with “speedy and fluent answers”. The Copilot Voice feature is similar to Gemini Live and ChatGPT’s Voice Mode. Meanwhile, the much-criticised Recall feature will finally be expanded to all Windows Insider users this month.

Microsoft Copilot Updated With AI-Powered Features

In a blog post, Microsoft shared several details of the new AI features coming to Copilot. These features will be available in the Copilot app on iOS and Android, on the web client, as well as in the Copilot assistant on Windows. The latter will only be available on Copilot+ PCs, which are currently powered by the Snapdragon X series chipsets.


Copilot Voice

With four voice options, users can now have a hands-free voice conversation with Microsoft’s chatbot. The company said it could be used for brainstorming, asking a quick query, or just having a friendly conversation.

Notably, while the feature will offer a speech-to-speech experience, the company has not said whether responses are generated in real time or whether the feature supports an emotive voice.


Copilot Vision

Copilot Vision, a new way to interact with the AI, is also being added. Once enabled, the feature will be able to see what the user sees on the screen. It also supports voice mode, letting users ask verbal queries about the content. For instance, users can show the AI a picture of furniture and ask about its colour palette, material, and more.

Since this feature can be perceived as invasive to user privacy, Microsoft has added several layers of security measures. The feature is opt-in and will not work until the user explicitly activates it.

Even after activating it, the feature currently only works with a limited number of websites. Further, the tech giant added that the data processed by the chatbot will not be collected or used to train the AI.


Windows Recall

Microsoft’s Recall feature, which takes passive screenshots of a user’s laptop or desktop and keeps track of the user’s activity locally, is now rolling out to a wider user base. Microsoft highlighted in a blog post that the feature will be rolled out to Windows Insiders using Copilot+ PCs this month.

For now, it will only be available on Snapdragon-powered PCs. In November, the tech giant will roll it out to AMD-powered PCs as well.

Gemini Live Two-Way Communication Feature Now Available for All Android Users: How to Use


Gemini Live, Google’s two-way voice chat feature for its artificial intelligence (AI) chatbot, is now available to all Android users. The feature was initially released to Gemini Advanced users via the Google One AI Premium plan, but the company is now rolling it out to all users. However, only the basic version of the feature is available: the choice between ten different voices is not offered in the free tier. A report earlier this month revealed that Google was rolling the feature out to all Android users.

Gemini Live Feature Now Available to All Android Users

Since the Gemini app is still not available on iOS, the Gemini Live feature is not available to iPhone users. However, Android users with a compatible device and the Gemini app will now see a waveform icon with a sparkle icon at the bottom-right corner, next to the microphone and camera icons.

Tapping on the waveform icon will give users access to the Gemini Live feature. Put simply, it is a two-way voice chat feature where both the user and the AI respond via speech. While the AI speaks fluently and shows slight voice modulation, it is not similar to ChatGPT’s Advanced Voice Mode feature, which comes with an emotive voice and the capability to react to the user’s words.

However, the feature is still useful when the user is on the go and would prefer a verbal conversation, whether to get a summary of an email or to learn about an intriguing topic. The full-screen interface of Gemini Live is similar to a phone call. Users will see a sound-wave-like pattern at the centre of the screen, with Hold and End buttons placed at the bottom. If you’re interested in using the feature, this is how you can do it.

How to Use the Gemini Live Feature

1. On an Android device, download and install the Gemini app.
2. Open the Gemini app.
3. Find the waveform icon at the bottom-right of the screen.
4. Tap on it.
5. First-time users will see a terms and conditions menu. Accept it.
6. You can now see the Gemini Live interface.
7. You can start speaking to trigger a response from the AI.
8. Using the Hold button, you can also interrupt the AI and continue with another prompt.

ChatGPT Subscription Prices Could Reportedly Be Hiked Before the End of the Year


ChatGPT, the artificial intelligence (AI) chatbot by OpenAI, is reportedly about to get more expensive for paid subscribers. According to a new report, the AI firm is planning to increase the subscription price for ChatGPT Plus users by $2 (roughly Rs. 167) a month. The price hike is not expected to stop there either, as the company is said to be planning to push the monthly subscription cost to $44 (roughly Rs. 3,685) over the next five years. The reasons behind pushing for a higher ticket price are said to be OpenAI’s revenue ambitions and the high cost of running its operations.

ChatGPT Subscriptions to Reportedly Get More Expensive


According to The New York Times, the AI firm is planning to increase the subscription price by $2 by the end of 2024. Citing financial documents viewed by the publication, the report further added that the final price of the ChatGPT Plus subscription might stand at $44 a month by the end of 2029, a steep climb from the current $20 a month in the US or Rs. 1,950 a month in India.
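For context, the reported figures imply a price that roughly doubles over five years. The quick check below uses the report's numbers; the growth-rate arithmetic is our own.

```python
# Implied compound annual growth rate (CAGR) of the reported price path:
# $22/month after the $2 hike at the end of 2024, $44/month by end of 2029.
start_price = 22.0  # USD per month, end of 2024 (reported)
end_price = 44.0    # USD per month, end of 2029 (reported)
years = 5

cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Implied annual price growth: {cagr:.1%}")  # -> about 14.9% per year
```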

With the price hike, the company reportedly wants to secure revenue of $100 billion (roughly Rs. 8.3 lakh crore) in 2029, a majority of which is expected to come from its subscription-based services. If the AI firm is able to achieve this target, it would be raking in annual revenue similar to that of Reliance Industries, Nestlé, or Comcast.

OpenAI currently has approximately 10 million ChatGPT Plus users, according to the report.

The documents reviewed by the publication, which were reportedly meant for investors, highlighted that OpenAI is currently making “billions” from ChatGPT and expects to boost those numbers significantly in the coming years. Notably, the company is in the process of closing a funding round.

Despite the big numbers projected in its revenue estimates, the company is reportedly struggling to optimise its operational costs. OpenAI is said to be losing approximately $5 billion (roughly Rs. 41.8 thousand crore) this year, most of which goes towards running its AI-powered services. Other significant cost centres include employee salaries and office rent.

Another major source of expense is reportedly cloud computing, for which the company uses Microsoft’s services. Despite receiving $13 billion (roughly Rs. 1.08 lakh crore) yearly through the partnership, the AI firm spends much of that money on running cloud processing.

OpenAI Sees $11.6 Billion Revenue Next Year, Said to Offer Thrive Chance to Invest Again in 2025

Thrive Capital is investing more than $1 billion in OpenAI’s current $6.5 billion fundraising round, and it has a sweetener no other investors are getting: the potential to invest another $1 billion next year at the same valuation if the AI firm hits a revenue goal, people familiar with the matter said on Friday.

OpenAI is predicting its revenue will skyrocket to $11.6 billion next year from an estimated $3.7 billion in 2024, the sources said, speaking on condition of anonymity. Losses are expected to be as much as $5 billion this year, depending largely on its spending on computing power, which could change, one of the sources added.

The current funding round, which comes in the form of convertible debt, is expected to close by the end of next week and could value OpenAI at $150 billion, cementing its status as one of the most valuable private companies in the world.

That valuation depends on pulling off a complicated restructuring to remove the control of its non-profit board and also remove the cap on investment returns to investors, a plan first reported by Reuters. There is no specific timeline for when the conversion could be completed.

Thrive Capital, which also led OpenAI’s previous funding round, is offering $1.2 billion from a combination of its own fund and a special purpose vehicle for smaller investors. Other investors in the new round include Microsoft, Apple, Nvidia and Khosla Ventures.

The others were not given the option of future investment at the current price, the sources said. OpenAI’s valuation has soared quickly, and if it continues to do so, Thrive could find itself increasing its stake next year at a discounted price.

Reuters was not able to determine the revenue target associated with the option for Thrive, which was founded by Joshua Kushner.

Thrive and OpenAI declined to comment.

OpenAI’s revenue expectations far exceed CEO Sam Altman’s earlier projection of $1 billion in revenue this year. The main revenue sources are sales of its services to corporations and subscriptions to its chatbot.

Its flagship product, ChatGPT, is expected to bring in $2.7 billion in revenue this year, jumping from $700 million in 2023. The chatbot service, which charges a $20 monthly fee, has about 10 million paying users.

The financials and details about Thrive’s additional option were first reported by The New York Times on Friday.


© Thomson Reuters 2024


(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

Intel Xeon 6 Processors and Gaudi 3 AI Accelerators With Ability to Handle Advanced AI Workloads Launched


Intel recently unveiled new hardware focused on improving artificial intelligence (AI) workflows. The company introduced the Xeon 6 processor with new Performance-cores (P-cores) and the Gaudi 3 AI Accelerator for enterprise customers and data centres on Tuesday. The chipmaker claims that the new hardware will offer both higher throughput and better cost optimisation, enabling optimal performance per watt and a lower total cost of ownership. These devices were launched to enable enterprises to handle the continuously increasing workload demands of more advanced AI models, according to the chipmaker.

Intel Xeon 6 Processor Launched

The chipmaker says that its new Intel Xeon 6 is equipped with Performance-cores. These processors are not meant for retail consumers; instead, they will power data centres for enterprises to help them run cloud servers.

Intel claims the Xeon 6 processor offers twice the performance of its predecessor due to an increased core count. It also offers double the memory bandwidth and AI acceleration capabilities.

Since the acceleration is hardware-based, the processor can support very large language models (LLMs) with ease. It can “meet the performance demands of AI from edge to data centre and cloud environments,” according to Intel.

Intel Unveils Gaudi 3 AI Accelerators

Gaudi 3 is a new-generation AI accelerator from Intel. AI accelerators are specialised hardware chips designed to speed up AI tasks, especially those related to deep learning, machine learning, and neural networks. The category includes GPUs, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), and Neural Processing Units (NPUs).

The Gaudi 3 AI Accelerator features 64 Tensor processor cores and eight matrix multiplication engines (MMEs), which are designed to accelerate deep neural network computations. It sports 128GB of HBM2e memory for training and inference, and 24 200Gb Ethernet ports that enable scaling up of servers.

Intel’s new AI accelerator is compatible with the PyTorch framework and advanced Hugging Face transformer and diffuser models.
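As a minimal sketch of what that compatibility looks like in practice, the snippet below moves a Hugging Face model onto a Gaudi card. It assumes the Intel Gaudi (Habana) PyTorch bridge is installed, which registers the accelerator with PyTorch as the "hpu" device; the model name is illustrative, not one named by Intel.

```python
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any PyTorch-based Hugging Face model applies.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("hpu")

# Tokenise a prompt, move it to the Gaudi device, and generate text.
inputs = tokenizer("Deep learning accelerators are", return_tensors="pt").to("hpu")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```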
The company has already tied up with IBM to deploy Gaudi 3 on IBM Cloud. Dell Technologies is also using the infrastructure for its data centres.
