Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely

Google introduced a new tool on Thursday to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company but also for other enterprises building large language models (LLMs). Now, it has introduced the SAIF tool, which can generate a checklist with actionable insights to improve the safety of an AI model. Notably, the tool is questionnaire-based: developers and enterprises have to answer a series of questions before receiving the checklist.

In a blog post, the company highlighted that it has rolled out a new tool that will help others in the AI industry learn from Google’s best practices in deploying AI models. Large language models can cause a wide range of harms, from generating inappropriate and indecent text, deepfakes, and misinformation, to producing dangerous information, including details about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even if an AI model is sufficiently secure, there is a risk that bad actors could jailbreak it to make it respond to commands it was not designed to follow. Given such high risks, developers and AI firms must take adequate precautions to ensure their models are both safe for users and secure against attacks. The tool’s questions cover topics such as the training, tuning, and evaluation of models, access controls to models and data sets, prevention of attacks and harmful inputs, generative AI-powered agents, and more.

Google’s SAIF tool offers a questionnaire-based format, which can be accessed here. Developers and enterprises are required to answer questions such as, “Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data?” After completing the questionnaire, users receive a customised checklist that they need to follow in order to close the gaps in securing their AI model.
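
Google has not published the tool’s internals, but a minimal sketch can illustrate the questionnaire-to-checklist flow the company describes. In the hypothetical Python example below, the question identifiers, the second question, and the recommendation text are illustrative assumptions rather than content from the actual SAIF tool; only the first question is quoted from the questionnaire.

# Hypothetical sketch of a questionnaire-driven checklist generator, assuming a
# simple mapping from "no" answers to remediation items. This is NOT Google's
# SAIF implementation; the identifiers and recommendations are illustrative.

QUESTIONS = {
    "data_integrity": ("Are you able to detect, remove, and remediate malicious "
                       "or accidental changes in your training, tuning, or "
                       "evaluation data?"),
    # The question below is a made-up example, not from the SAIF questionnaire.
    "prompt_filtering": "Do you screen user inputs for prompt-injection attempts?",
}

RECOMMENDATIONS = {
    "data_integrity": ("Add provenance tracking and integrity checks for "
                       "training, tuning, and evaluation data."),
    "prompt_filtering": ("Place input validation and prompt-injection detection "
                         "in front of the model."),
}

def build_checklist(answers):
    """Return (question, remediation) pairs for every question answered 'no' (False)."""
    return [(QUESTIONS[qid], RECOMMENDATIONS[qid])
            for qid, answered_yes in answers.items()
            if not answered_yes]

if __name__ == "__main__":
    # Example run: data integrity controls exist, but prompt screening does not.
    answers = {"data_integrity": True, "prompt_filtering": False}
    for question, remediation in build_checklist(answers):
        print("Gap:", question)
        print("Action:", remediation)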

The tool covers risks such as data poisoning, prompt injection, and model source tampering, among others. Each of these risks is identified in the questionnaire, and the tool offers a specific mitigation for it.

Alongside the tool, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly create AI security solutions in three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.
