Firstly, it’s ChatGPT, not GTP, and secondly, that first step is something we already do. It doesn’t produce a list of validated or qualifying summits; it produces a list of candidates. Typically, somewhere between 10% and 30% of those candidates will fail to qualify once map surveying happens, and maybe 20–30% will need shifting to a different location (a harder number to quantify, so that one is based on my personal experience).
I encourage you to read the following to understand some of the technical problems that manifest around it:
Then I would suggest reading a bit about Large Language Models and the problems they solve versus the problems they don’t. ChatGPT, useful though it is for some things, is a text-processing large language model and is not optimised for doing summit validation for SOTA. Some other form of ML may help, but Gemini, ChatGPT and Claude are not it.
What I have been thinking about lately is how to identify summits that are not at risk of being eliminated from the candidates list. It requires a measure of the bumpiness of the surrounding terrain, but I haven’t yet wrapped my head around how that might work in code. Basically: find a key col and ensure there aren’t too many <P150 sub-summits within that col’s catchment area/contour tree, or something along those lines. If this could be quantified, it could potentially be used to generate starter lists of low-risk summits, but we’re nowhere near that yet.
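To make the idea concrete, here is a minimal sketch of one way the key-col / sub-summit counting could work on a rasterised DEM. This is not anything that exists in SOTA tooling — the function names are mine and the grid is a toy example — but the technique is the standard highest-first union-find sweep for topographic prominence: visit cells from highest to lowest, grow regions around each local high point, and whenever two regions meet, the meeting cell is the key col of the lower region's peak. Counting peaks whose prominence comes out positive but below 150 m then gives a crude "bumpiness" score for the terrain around a candidate:

```python
import numpy as np

def peak_prominences(dem):
    """Topographic prominence of every local high point on a DEM grid,
    via a highest-first union-find sweep. When two growing regions meet,
    the meeting cell is the key col of the lower region's peak, which
    fixes that peak's prominence as (peak height - col height)."""
    h, w = dem.shape
    flat = dem.ravel()
    parent, peak_of, prom = {}, {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for idx in np.argsort(flat)[::-1]:     # visit cells highest first
        idx = int(idx)
        r, c = divmod(idx, w)
        parent[idx] = idx
        peak_of[idx] = (flat[idx], idx)    # new cell starts as its own "peak"
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            nidx = nr * w + nc
            if nidx not in parent:
                continue                   # neighbour is lower, not visited yet
            ra, rb = find(idx), find(nidx)
            if ra == rb:
                continue
            (ha, pa), (hb, pb) = peak_of[ra], peak_of[rb]
            # the lower peak "dies" here: current cell is its key col
            if (ha, pa) < (hb, pb):
                prom[pa] = ha - flat[idx]
                peak_of[rb] = (hb, pb)
            else:
                prom[pb] = hb - flat[idx]
                peak_of[rb] = (ha, pa)
            parent[ra] = rb
    # the last surviving peak is the high point of the whole grid; by
    # convention give it its full height above the grid minimum
    top_h, top_idx = peak_of[find(idx)]
    prom[top_idx] = top_h - flat.min()
    return {divmod(p, w): v for p, v in prom.items() if v > 0}

def sub_p150_bumps(dem, threshold=150):
    """Count distinct bumps with positive prominence below the P150
    threshold -- a rough bumpiness score for the surrounding terrain."""
    return sum(1 for v in peak_prominences(dem).values() if v < threshold)
```

On a toy grid with a 1000 m summit and a 950 m bump separated by a 900 m col, the bump comes out at 50 m prominence — one sub-P150 bump, so a candidate on that ridge would be flagged as higher risk. A real version would obviously need to work on proper DEM tiles (SRTM or similar) and handle the catchment boundary properly rather than the whole grid, which is where the hard part lies.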