Generative AI applications, like ChatGPT and its successors, are powerful, potentially disruptive programmes that can contribute positively to the creation of written or graphic content. Used irresponsibly, however, they raise serious concerns, and we consider it essential that any such tools and applications are used within a clear ethical and legal framework.
To assist and inform staff, clients and partners, Priority Consultants Group has set out the following guidelines on the use of Generative AI within the firm.
Protection of Client-confidential Information and Intellectual Property
- No confidential client or company information is to be entered into a Generative AI tool or platform. Any information entered may be absorbed by such tools and used to train their models, or may even appear in output they subsequently generate. Any text entered as a query theoretically becomes part of the public sphere and is no longer under our control.
- We do not use Generative AI images as final visuals. AI-generated work risks infringing the copyright of the original material the image was based on, and it may not be possible to protect the copyright of the work itself.
Commitment to accuracy
- All sources and references provided by generative AI tools are validated and checked through our own independent research. This is because some generative AI chat tools have been found to fabricate convincing-sounding information.
- To the best of our ability, we always check AI-generated output for inadvertent plagiarism or for infringements of copyright or trademark.
- We insist that vendors adhere to the same standards of transparency by including this policy in the contracts they sign with us.
Open and transparent relationships with our stakeholders
- We disclose to clients if Generative AI tools are used beyond the initial research stage when creating content on their behalf. We do this either through a clause in our contract with the client or through case-by-case disclosure where use of the AI tool is infrequent.
- Internally, we require employees to disclose where they are using, or intend to use, Generative AI in their drafting or creative process.
- If we use content created by influencers on behalf of clients, we require that they also disclose in writing any use of generative AI tools as part of their content creation process.
- We respect the rights of other creators and never prompt generative AI to develop creative content similar to that of a specific writer or artist.
Generative AI and our commitment to Diversity, Equity and Inclusion
- We are alert to the risk of biases appearing in AI-generated output and have processes in place to check for and mitigate any such issues.
- We do not rely on generative AI tools to translate documents into other languages; instead, we work as appropriate with our own staff or with external professionals who have the required language capabilities.
- We do not use generative AI as a replacement for the diversity of experiences, insights, domain expertise and local knowledge of our multinational, region-wide team.
Compliance
- We conduct company-wide training on best practices and the proper ethical and legal use of AI to protect both our own brand and the brands of our clients.
- Any breach of this policy should be reported immediately to the relevant supervisor. Priority Consultants will investigate reported breaches and take appropriate disciplinary action where necessary.