
AI & LLM Session #2

This session discussed the importance of education and policy in AI governance, emphasising the role of trusted advisors in preventing users from going rogue. Key points included:

  • Governance of AI and Education: A carrot-and-stick approach was suggested for making AI usage more appealing, starting with a hardline stance to educate users about AI. Trusted advisors play a crucial role in building trust and guiding decisions.

  • Blocking, Restrictions, and Data Classification: Tools such as Zscaler are used to block certain generative AI services. Managing uncertainty through small, iterative steps, such as using call centre apps for summarisation, was discussed. The need for better data classification and for resolving historical data issues in SharePoint was highlighted.

  • Managing Expectations and a Solution-Oriented Approach: Projects are broken down into proofs of concept (POCs), with different tools tested monthly. Junior developers found AI tools helpful, but more experienced developers did not; this feedback was used to create a mini anti-business case. The focus should be on finding solutions to problems rather than chasing outcomes.

  • Tempering Expectations and AI Categories: Managing expectations is tricky, especially for less technical people. Running sessions with the executive team and external experts, like those from Google, helps provide a realistic view of AI's capabilities. AI was categorised into everyday AI, embedded AI, and custom AI.

  • Shadow Reading, Tools, and Open Communication: The group discussed the use of AI tools such as Otter AI and a preference for Teams over Zoom. An AI questionnaire was mentioned as a way to gauge organisational adoption and usage. AI initiatives should be made as open as possible so people can see what others are doing. Comparing AI to health and safety can help secure executive buy-in while leveraging existing systems and frameworks.


Previous: AI & LLM Session #1 (21 October)

Next: AI & LLM Session #3 (22 October)