AI analysis of Phenix results
Authors
- Nigel Moriarty, Peter Zwart, Thomas C Terwilliger
Purpose
The Phenix AI analysis tool is designed to help you interpret the output
from a run in the Phenix GUI. It analyzes your log file, any text written
to the GUI, and the names of any output files supplied to the GUI, and
creates a summary of the run. It then analyzes this summary in the
context of the Phenix documentation to describe how the run fits into the
framework of structure determination and to suggest next steps.
The AI analysis tool can be accessed in the Results section of most
Phenix GUI runs, next to the Log file button. You can also access it
under the Tools menu.
Google and OpenAI API Keys
You can run the AI analysis with Ollama (run on the Phenix server), or
with Google (Gemini) or OpenAI. If you use Google or OpenAI, API keys
for these services are required.
You can run AI analyses with Google or OpenAI without obtaining your own
key, as a shared set of keys is supplied. The number of analyses allowed
with these shared keys is limited, however.
How the AI analysis works
The AI analysis is carried out in two steps. In the first step, the log file,
along with text written to the GUI and the names of output files supplied to
the GUI, is summarized using a LangChain analysis with reranking by
FlashRank. In the second
step, the summary from the first step is analyzed in the context of all
the Phenix documentation, transcripts of Phenix tutorial videos, and Phenix
publications to provide background and to suggest next steps to take.
This type of AI does not save or learn from your questions. However, the
information in your log file is sent to the Phenix server, and from there,
on to Google Gemini.
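The two-step flow described above can be sketched in miniature. This is an
illustration only: the real tool uses LangChain with FlashRank reranking and
a large language model, and every function and variable name below is
hypothetical, not part of the Phenix API. Token overlap stands in for the
reranker, and short strings stand in for the log and documentation.

```python
import re

def tokens(text):
    """Lowercase word-like tokens (keeps hyphens so 'R-factor' survives)."""
    return set(re.findall(r"[a-z0-9.-]+", text.lower()))

def rerank(chunks, query, top_k=2):
    """Rank chunks by token overlap with the query (stand-in for FlashRank)."""
    q = tokens(query)
    return sorted(chunks, key=lambda c: -len(q & tokens(c)))[:top_k]

def summarize_run(log_text, query="resolution R-factor output files"):
    """Step 1: condense the log to its most relevant lines."""
    chunks = [line for line in log_text.splitlines() if line.strip()]
    return " ".join(rerank(chunks, query))

def analyze(summary, docs):
    """Step 2: pair the summary with the best-matching documentation."""
    return {"summary": summary, "context": rerank(docs, summary, top_k=1)}

log = ("Refinement complete\n"
       "Final R-factor: 0.21\n"
       "Wrote output files model.pdb")
docs = ["An R-factor near 0.2 indicates a reasonable refinement.",
        "Density modification improves maps."]
result = analyze(summarize_run(log), docs)
```

In the real tool, the second step sends the summary and the retrieved context
to an LLM rather than returning them directly.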
What to do with the AI analysis
The purpose of the AI analysis is to help you interpret your output and to
suggest ideas for next steps. Keep in mind that these tools can make
mistakes, giving incorrect interpretations and poor suggestions at times,
so treat the information as suggestions to think about rather than as
definitive answers.
You can combine this AI analysis with the Phenix
chatbot. The chatbot can give
you interactive answers to your questions using the same database of information
as the AI analysis. This allows you to follow up on the AI analysis with
questions to the chatbot. You can also paste part of the output from the
AI analysis into the chatbot along with a question to get more context.
Limitations of the AI analysis
The AI analysis is limited to the sources it is supplied with, so it only
knows about Phenix, and it only knows what is in the documentation, the
videos and newsletters, and the papers we have supplied.
AI tools like this one can also simply make mistakes and give incorrect
answers. This does not seem to happen often with this tool, but
you should always be alert when using it. Use the tool as a helper;
don't expect it to always be right.
If a detail is missing from the documentation, the AI analysis will not know it.
Privacy in the AI analysis
If you use the AI analysis tool, your log files are sent to the Phenix server,
and from there on to OpenAI or Gemini. That means the data
could potentially
be used by OpenAI and Google in any way that they use other
AI data that is sent to them.
Note that access to OpenAI and Gemini is non-commercial only.
List of all available keywords
- job_title = None Job title in PHENIX GUI, not used on command line
- ai_analysis
- log_file = None
- log_as_simple_string = None
- file_list_as_simple_string = None
- display_text_as_simple_string = None
- summary_as_simple_string = None
- analysis_as_simple_string = None
- load_existing_analysis = True
- program_name = None
- analysis_file_name = None
- summary_file_name = None
- write_files = True
- timeout = 180
- display_results = True
- analysis_mode = *standard agent_session advice_preprocessing directive_extraction failure_diagnosis Type of analysis: standard (single log file), agent_session (multi-step AI agent run with structured summary), advice_preprocessing (process user advice into structured format), directive_extraction (extract structured directives from advice), or failure_diagnosis (LLM diagnosis of a diagnosable-terminal error)
- include_llm_assessment = True For agent_session mode, whether to include LLM-generated assessment
- session_json = None For agent_session mode, the session data as JSON string
- raw_advice = None For advice_preprocessing mode, the combined raw advice string
- experiment_type_hint = None For advice_preprocessing mode, experiment type (xray/cryoem)
- file_list_hint = None For advice_preprocessing mode, comma-separated input file names
- user_advice_for_directives = None For directive_extraction mode, the processed user advice string
- failure_error_type = None For failure_diagnosis mode: error type key from diagnosable_errors.yaml
- failure_error_text = None For failure_diagnosis mode: encoded error excerpt from the failing program
- failure_program = None For failure_diagnosis mode: the PHENIX program that failed
- failure_log_tail = None For failure_diagnosis mode: encoded last section of the failing program log
- communication
- run_on_server = True Run job on Phenix server
- wait_for_server = False If server is busy or down, wait up to max_wait_time
- update_wait_time_if_down = 5 Time to wait before trying again on server if it is down
- update_wait_time_if_busy = 5 Time to wait before trying again on server if it is busy
- max_wait_time = 450 Maximum wait time
- max_server_tries = 1 Maximum calls to server
- stop_if_internet_not_available = True Stop if attempting to predict models online and internet is not available
- verbose = False Verbose output on communications
- provider = ollama *google openai Provider for AI analysis. Ollama is cheapest, google is quickest, OpenAI is most thorough.
- rest_server
- url = None The URL for the Phenix REST server Normally set automatically
- url_type = prediction *ai Type of server url (prediction or ai). Normally set automatically
- port = None The port for interacting with the Phenix REST server Normally set automatically
- token = None Authentication token for accessing the Phenix REST server. Normally set automatically
- timeout = 5 Time to keep trying to connect to a server
- quick_check_interval = 1 Time in seconds between job status checks
- check_interval = 5 Time in seconds between job status checks
- max_tries = 20 Number of tries to get result from server
- max_tries_on_availability = 2 Number of tries to see if server is up
- stop_if_server_not_available = True Stop if server is not available (wrong url or down)
- job_size = *small medium large Size of job (small, medium, large)
- requires_gpu = False Job requires GPU
- running_server_test = False This is a test job
- verbose = False Verbose output
- gui GUI-specific parameter required for output directory
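Keywords like those above follow Phenix's PHIL parameter syntax, so they can
be grouped by scope in a parameter file. A minimal sketch (the log file name
is a placeholder; the values shown are taken from the list above, but the
exact file layout accepted by this tool is an assumption):

```
ai_analysis {
  log_file = refine_001.log
  timeout = 180
}
communication {
  provider = google
}
```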