We recognize that while AI offers immense potential, it also comes with data security and privacy risks, ethical issues, and concerns about the quality and accuracy of work produced with AI. That’s why we’re committed to an approach we call Mindful Innovation: balancing progress with responsibility.
Our ultimate goal is to deliver Authentic Intelligence, grounded in real-world learning, human insight, and lived experience.
Our commitment to being person-first is in our name. As technology leaders, we want to adopt new technologies (and become experts in them) without compromising our dedication to security, our values, or our goal of being the friendliest IT service provider you can find.
In a rapidly changing technology, policy, and security landscape, our team takes an iterative approach, keeping both our internal use of AI and our guidance for clients up to date.
We encourage our staff to experiment with generative AI to boost productivity and enhance workflows. By using generative AI tools ourselves, we better understand their uses and drawbacks, and we can advise our clients accordingly. Any use of AI must be in accordance with our internal AI use policy, as well as our tenets of service and dedication to high-quality work. We are committed to a human-first approach, ensuring AI supports staff without replacing them.
Beyond generative AI, our team will never outsource critical human decisions to AI algorithms. We recognize the long history of bias baked into automated decision-making systems; relying on them for those decisions would run counter to our values as a company committed to building a more inclusive workplace.
Our client roster includes AI early adopters as well as those still figuring out whether AI adds value to their work. Each client has their own threat model and risk tolerance. Our guidance is focused on keeping our clients informed on the latest best practices in AI, along with how to mitigate any privacy and security risks.
When providing AI guidance to our clients, we always seek first to clarify what type of artificial intelligence we are discussing. In this document, the type of AI we most commonly refer to is generative AI: content-generating tools such as ChatGPT, Google Gemini, or Apple Intelligence. Ensuring that we all share a frame of reference is an important baseline for talking about AI internally and with our clients.
Our AI guidance leads with our security assessment of each specific technology, since our clients rely on our expertise for their cybersecurity strategy and decision-making. We recognize that there is no “one size fits all” decision about AI use, and we strive to include information that helps our clients make informed decisions as they weigh their own approaches to AI adoption.
While our guidance focuses on privacy and security, we recognize that AI use raises a variety of additional concerns: environmental, ethical, liability, and copyright, to name a few. We will highlight these concerns where we are aware of them and have information or resources to share, but we encourage our clients to seek additional guidance from experts in compliance and risk management as necessary.
Personified’s generative AI policy encourages our staff to thoughtfully use AI as a tool to enhance productivity, creativity, and efficiency—never to replace human insight. We embrace AI for tasks like brainstorming, editing, summarizing, and visualizing information, while emphasizing the importance of human oversight. All AI-generated content must be reviewed for accuracy, aligned with Personified’s voice, and clearly labeled when AI is involved. Transparency and consent are essential, especially in meetings or when handling client-facing materials.
Our policy sets clear ethical boundaries, including prohibiting the upload of client data without consent and avoiding AI use in decisions where bias could be harmful, such as hiring. Only approved tools may be used, and new tools must go through a thorough and consensus-based review process. At its core, the policy reflects a human-first ethos: AI should support high-quality, responsible work that reflects the company’s mission and values.
We understand that generative AI is a resource with externalized costs, and we use it with intentionality.
Personified has a vendor assessment process to verify the security and privacy of the generative AI tools that we use. We approve use of tools that comply with our stringent privacy and security requirements.
In an era of rapid technological advancement, Personified remains steadfast in its commitment to thoughtful, secure, and human-centered AI use. Our approach, grounded in Mindful Innovation, prioritizes both progress and responsibility, ensuring that AI serves as a supportive tool rather than a substitute for human judgment. By embracing generative AI with clear ethical boundaries, rigorous security standards, and a person-first mindset, we help our clients and team members navigate the evolving AI landscape with confidence and clarity. Ultimately, our goal is to foster Authentic Intelligence—where human insight and cutting-edge technology work hand in hand to deliver meaningful, inclusive, and secure outcomes.