FedRAMP’s Emerging Technology Prioritization Framework – Overview and Request for Comment

January 26, 2024

The President signed Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) on October 30, 2023. The AI EO is intended to “help govern the development and use of AI safely and responsibly,” and it “is therefore advancing a coordinated, federal government-wide approach to doing so.” To meet that goal, the President directed GSA to prioritize emerging technologies in the FedRAMP authorization process, beginning with generative AI, that can help federal agencies accomplish their missions more effectively.

GSA is publishing this draft Emerging Technology Prioritization Framework. This document describes the operational framework for how FedRAMP will prioritize certain Cloud Service Offerings (CSOs) that provide specific emerging technologies during the FedRAMP authorization process. The prioritization process will be integrated into existing and future FedRAMP authorization paths. The prioritization framework will not create additional authorization pathways and will maintain the same rigorous and thorough authorization requirements.

The first three prioritized emerging technology capabilities use large language models (LLMs) and include: 1) chat interfaces, 2) code-generation and debugging tools, and 3) prompt-based image generators.

To ensure the draft framework is clear, addresses the goals of EO 14110, and meets the needs of as many stakeholders in the ecosystem as possible, GSA is releasing this draft for public comment and invites your input. GSA requests that stakeholders review the draft Emerging Technology Prioritization Framework and submit any comments, questions, or recommendations using the input form by March 11, 2024. When providing feedback, please include the section number your feedback pertains to. GSA is especially interested in feedback on the questions below:

- The fundamental goal of prioritizing specific technology capabilities is to ensure the most important capabilities are available to federal agencies. To ensure FedRAMP prioritizes offerings that meet agency needs, the current draft requires some basic reporting of benchmarks to determine eligibility (not overall performance). Will this requirement help ensure the prioritized offerings meet agency needs?
- How should the benchmarking process be structured to keep the process focused on eligibility and avoid agencies or cloud service providers (CSPs) interpreting it as setting a more general bar of quality?
- Are the specific benchmarks provided sufficient? Are they too constraining? Are they too flexible?
- Which entity should determine which benchmark to use: the agency sponsor, or the CSP?
- Could the overall approach to the AI criteria be simplified?
- How can FedRAMP best assess whether providing a relevant emerging technology is the “primary purpose” of the cloud service offering?
- Is there any other information FedRAMP should consider before allowing a specific CSO to be prioritized in the queue?
- Is the process outlined in this prioritization framework reasonable for CSPs to work with?
- Is there relevant information that could be collected from CSPs to facilitate quicker adoption by agencies?
- Should GSA publish more information about how different benchmarks better apply to specific AI use cases?
- In the future, are there factors that would merit prioritization other than emerging technologies?
If you would like to build on existing comments in your feedback, you can view the read-only document of submitted comments here. Thank you for your feedback and partnership in modernizing FedRAMP to meet our stakeholders’ needs. For any questions, or if you have issues accessing the form, please email [email protected].