Strategy and Delivery Adviser - AI Safety Institute
Department for Science, Innovation & Technology
Apply before 11:55 pm on Sunday 6th October 2024
Details
Contract type: Loan
Type of role: Strategy
About the job
Job summary
AI is bringing about huge changes to society, and it is our job as a team to work out how Government should respond. It is a once-in-a-generation moment, and an incredibly fast-paced and exciting environment.
AI Safety Institute
Advances in artificial intelligence (AI) over the last decade have been impactful, rapid, and unpredictable. Advanced AI systems have the potential to drive economic growth and productivity, boost health and wellbeing, improve public services, and increase security.
But advanced AI systems also pose significant risks, as detailed in the government’s paper on Capabilities and Risks from Frontier AI, published in October 2023. AI can be misused: this could include using AI to generate disinformation, conduct sophisticated cyberattacks, or help develop chemical weapons. AI can cause societal harms: there have been examples of AI chatbots encouraging harmful actions, promoting skewed or radical views, and providing biased advice. AI-generated content that is highly realistic but false could reduce public trust in information. Some experts are concerned that humanity could lose control of advanced systems, with potentially catastrophic and permanent consequences.

We will only unlock the benefits of AI if we can manage these risks. At present, our ability to develop powerful systems outpaces our ability to make them safe. The first step is to better understand the capabilities and risks of these advanced AI systems. This will then inform our regulatory framework for AI, so that we can ensure AI is developed and deployed safely and responsibly.
The UK is taking a leading role in driving this conversation forward internationally. We hosted the world’s first major AI Safety Summit and have launched the AI Safety Institute. Responsible action in an area as new and fast-paced as advanced AI requires government to develop its own sophisticated technical and sociotechnical expertise. The AI Safety Institute is advancing the world’s knowledge of AI safety by carefully examining, evaluating, and testing new types of AI, so that we understand what each new model is capable of. The Institute is conducting fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI. The Institute will make its work available to the world, enabling an effective global response to the opportunities and risks of advanced AI.
Job description
As a Strategy and Delivery Adviser, you will be working with a team of research scientists and engineers to drive forward cutting-edge AI safety research on the highest priority issues.
You’ll provide crucial support for a team working on a specific set of AI safety issues: cyber risks, chem-bio risks, or safety cases (see below for further details).
You might work on building a research strategy for your team, writing submissions and briefs for seniors and ministers, setting up and managing research partnerships, organising events and workshops, forging strong relationships with external stakeholders like major AI companies and other governments, coordinating model tests, or engaging the cross-Whitehall community to ensure our work has impact.
These are multi-faceted roles which involve a mixture of strategy, policy and project management. They will be suitable for people who love getting things done, but who also enjoy big picture thinking and engaging with technical detail.
Successful applicants will work within one of the three following workstreams. If you have a strong preference for any of these, please do state as much in your personal statement:
Cyber Misuse
The aim of the Cyber Misuse team is to deeply understand, assess and mitigate the risks from AI uplifting threat actors in conducting cyber-attacks. This involves developing risk and capability thresholds for cyber that focus on the greatest expected harm, building evaluations that assess the priority capabilities identified, and running these evaluations as part of pre-deployment and lifecycle testing exercises.
In this role you will support the strategy and delivery of projects that develop our risk modelling or build new evaluations. These projects could range from research and human uplift studies to creating complex automated cyber evaluations. You will contribute to the development of our risk and capability thresholds, and communicate our work to key stakeholders by producing briefings and building relationships across government and externally. While we don’t expect you to have a technical or cybersecurity background, we strongly encourage candidates with relevant experience to apply.
Safety Cases
Safety cases, already used as standard in other industries, are structured arguments that a system is unlikely to cause significant harm if deployed in a particular setting. As the AI frontier develops, we expect safety cases could become an important tool for mitigating AI safety risks, whereby AI companies set out detailed arguments for how they have ensured their models are safe. We believe it is possible to significantly develop our understanding of what a good safety case would look like today, even though the field is far from knowing how to write a detailed safety case.
In this role, you’ll support the safety cases policy / strategy lead to ensure that this research has an impact on the safety of AI systems. Strong candidates will have a pre-existing interest in AI safety, and be able to clearly and thoughtfully analyse a safety case for an AI system (although we don’t expect candidates to have a technical background or ML expertise). Alongside strategy and delivery responsibilities, you might attend cross-government meetings on AI policy, or write policy or academic papers on the use of AI safety cases.
Key responsibilities (indicative, with some variation across workstreams):
- Overseeing the delivery of a suite of research projects by our in-house technical team and select external research partners
- Working with our technical researchers to devise and deliver new research projects, in line with AISI’s strategic objectives, and translating their findings into useful outputs for policy makers
- Helping shape and define the longer-term strategy of the team and contributing to the wider research vision of the AISI
- Acting as a point person for AISI’s research agenda, communicating the team’s work to senior officials within AISI and to ministers across Whitehall
- Working with National Security partners in organisations across the UK Government
- Building and leveraging a network of research partners and policy stakeholders within and outside of government
- Coordinating the delivery of pre- and post-deployment model tests
Person specification
These are fast-paced and challenging roles, with the potential to have a massive impact on the work of the AI Safety Institute. We are looking for exceptional operators who can drive things forward and take responsibility for achieving the objectives of the team. You will be excellent at building strong, trusting relationships, problem-solving, and co-ordinating complex projects.
Essential criteria
- Start-up mindset / entrepreneurial approach: this will involve navigating significant uncertainty, adapting quickly, taking a 'trial and get feedback quickly' approach to much of your work, and being willing to get stuck in and add value
- Passionate about the mission of the AI Safety Institute, and ideally with a good working knowledge of issues at the intersection of AI and cyber security, or issues related to AI alignment
- Able to work effectively at pace, make decisions in the face of competing priorities, and remain calm and resilient under pressure
- Able to manage a wide range of diverse stakeholders to achieve goals
- Proactive and able to identify solutions to complex problems: breaking down large, intractable issues into tangible and effective next steps
- Able to operate with autonomy and ‘self-drive’ your work
- Excellent written and oral communication skills, able to communicate effectively with a range of expert and non-expert stakeholders
- Experience managing complex projects with multiple stakeholders
Behaviours
We'll assess you against these behaviours during the selection process:
- Delivering at Pace
- Communicating and Influencing
Benefits
The Department for Science, Innovation and Technology offers a competitive mix of benefits including:
- A culture of flexible working, such as job sharing, homeworking and compressed hours.
- Automatic enrolment into the Civil Service Pension Scheme, with an employer contribution of 28.97%.
- A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
- An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
- Access to a range of retail, travel and lifestyle employee discounts.
Office attendance
The Department operates a discretionary hybrid working policy, which provides for a combination of working from your place of work and from your home in the UK. The current expectation is for staff to attend the office or other non-home-based location for 40-60% of their time over the accounting period.
Things you need to know
Selection process details
As part of the application process you will be asked to complete a CV and personal statement.
Further details around what this will entail are listed on the application form.
Please use your personal statement (in no more than 500 words) to explain why you would like to work for AISI and why you think you would be a good fit for this role, explaining how your skills and experience match the Essential criteria of the advertisement.
In the event of a large number of applicants, applications will be sifted on the CV.
Candidates who pass the initial sift may be progressed to a full sift, or progressed straight to assessment/interview.
If you are selected for interview, we will ask you to prepare a short presentation, and you will be assessed against the Behaviours using behaviour-based questions.
If you are successful at the first interview, you will progress to a second interview with team members from the workstream, which will involve a discussion of your experience and skills related to the role.
Sift and interview dates
Sift and interview dates to be confirmed.
Further Information
This role is full time only. Applicants who wish to work an alternative pattern are welcome to apply; however, your preferred working pattern may not be available, and you should discuss this with the vacancy holder before applying.
Existing Civil Servants and applicants from accredited NDPBs are eligible to apply, and can be considered on a loan basis (Civil Servants) or secondment (accredited NDPBs). Prior agreement to be released on a loan basis must be obtained before commencing the application process. In the case of Civil Servants, the terms of the loan will be agreed between the home and host department and the Civil Servant. This includes grade on return.
For further information on National Security Vetting, please visit the following page: https://www.gov.uk/government/publications/demystifying-vetting
Reasonable Adjustment
We are proud to be a Disability Confident Leader and we welcome applications from disabled candidates and candidates with long-term conditions.
Information about the Disability Confident Scheme (DCS) and some examples of adjustments that we offer to disabled candidates and candidates with long-term health conditions during our recruitment process can be found in our DSIT Candidate Guidance. A DSIT Plain Text Version of the guidance is also available.
We encourage candidates to discuss their adjustment needs by emailing the job contact which can be found under the contact point for applicants section.
If you are experiencing accessibility problems with any attachments on this advert, please contact the email address in the 'Contact point for applicants' section.
If successful and transferring from another Government Department, a criminal record check may be carried out.
New entrants are expected to join on the minimum of the pay band.
A location-based reserve list of successful candidates will be kept for 12 months. Should another role become available within that period, you may be offered that position.
Please note terms and conditions are attached. Please take time to read the document to determine how these may affect you.
Any move to the Department for Science, Innovation and Technology from another employer will mean you can no longer access childcare vouchers. This includes moves between government departments. You may, however, be eligible for other government schemes, including Tax-Free Childcare. You can determine your eligibility at https://www.childcarechoices.gov.uk
DSIT does not normally offer full home working (i.e. working at home); but we do offer a variety of flexible working options (including occasionally working from home).
DSIT cannot offer visa sponsorship to candidates through this campaign.
DSIT holds a visa sponsorship licence, but this can only be used for certain roles, and this campaign does not qualify.
In order to process applications without delay, we will be sending a Criminal Record Check to the Disclosure and Barring Service on your behalf.
However, we recognise in exceptional circumstances some candidates will want to send their completed forms direct. If you will be doing this, please advise Government Recruitment Service of your intention by emailing Pre-EmploymentChecks.grs@cabinetoffice.gov.uk stating the job reference number in the subject heading.
Applicants who are successful at interview will, as part of pre-employment screening, be subject to a check against the Internal Fraud Database (IFD). This check will provide information about employees who have been dismissed for fraud or dishonesty offences, as well as employees who resigned or otherwise left before they would have been dismissed for fraud or dishonesty had their employment continued. Any applicant whose details are held on the IFD will be refused employment.
A candidate is not eligible to apply for a role within the Civil Service if the application is made within a 5-year period following a dismissal for carrying out internal fraud against government.
Feedback
Feedback will only be provided if you attend an interview or assessment.
Security
Nationality requirements
Working for the Civil Service
We recruit by merit on the basis of fair and open competition, as outlined in the Civil Service Commission's recruitment principles.
Diversity and Inclusion
Apply and further information
Contact point for applicants
Job contact:
- Name: Benjamin Hilton
- Email: benjamin.hilton@dsit.gov.uk
Recruitment team
- Email: active.campaigns@dsit.gov.uk