
Degradation of SUPT


Merle Dixon

Recommended Posts

9 hours ago, Splash95 said:

I recently washed out of UPT. Of course I don't have access to big-picture data, but of the 28 of us who started, 5 failed, 1 SIEd and 1 rolled back and then went to another base for reasons unrelated to flying. I know several studs from the class before ours also washed, though I don't have exact numbers. I have a lot of prior (civ and mil) flying experience, I worked extremely hard, and I still failed. Does that disprove assertions that UPT has degraded or become easy? Certainly not. Still, of the 11 in my own flight for most of the way through (1 rolled back but eventually got wings), the majority went to at least one 89 ride or ground eval. My classmates, to my knowledge, were all dedicated and took the program seriously, and I'm happy that most of them graduated.

Splash: Good on you for giving it a shot. Keep your chin up--your UPT performance doesn't define you as a person.

From what I've seen in the MAF, the recent UPT grads are just as good or as bad as the older ones. I've flown with copilots from Altus that blew me away with strong GK/procedures/flying, and I've flown with copilots that were, well, copilots. What's more frustrating to me than the UPT syllabus is that kids graduate UPT, sit for a while, go to Altus, PCS to base X, sit some more while waiting for SERE and water survival, then finally touch a plane again after 3, maybe 4 months. That's a long time to sit; young pilots' hand-flying skills are very perishable.

I have yet to be shocked by a newly minted pilot's (in)abilities after UPT. Does anyone out in the operational squadrons have similar experiences, or the opposite?


[image: UPT graduation rate chart, 2011-2018]

Graduation rates for 2011-2018, with a rough overall average of 84%...imagine if our average graduation rate were 94% without degrading the curriculum. Ten percentage points seems like a small change, but it equates to hundreds of extra pilots through the pipeline each year. The quality of the program is another question, but this model is agnostic to program quality. It is a tool for building a new vetting process to improve who we select to go to UPT in the first place. It has shortfalls in its dependence on the 2010-2018 UPT process, but it is definitely a worthwhile discussion to have, and an alternate way of solving the UPT backlog crisis.
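
Quick back-of-the-envelope math on that, for anyone who wants to sanity-check it (the annual-entrant figure below is my assumption for illustration, not an official number):

```python
# Rough math: what is a 10-point graduation-rate bump worth in bodies?
# The entrant count is a hypothetical assumption, not an official figure.
entrants_per_year = 1300                  # assumed UPT starts per year
base_rate, improved_rate = 0.84, 0.94

extra_per_year = entrants_per_year * (improved_rate - base_rate)
print(f"extra graduates per year: {extra_per_year:.0f}")        # ~130
print(f"extra graduates, 2011-2018: {extra_per_year * 8:.0f}")  # ~1040 over 8 years
```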

 




A few things. First, any prediction that is going to be made will, by definition, be "backwards looking," since there's no such thing as future data. And while there may well be better predictor variables out there, the difficulty will be capturing them in a consistent and reliable way across a large population distributed across multiple communities and multiple time spans - not an easy challenge.

You're missing the point I'm making: a descriptive model is not the same as a predictive model. Just because you have a model that looks back to a point in time and can model the outcomes at that time with high success does not mean that model will be useful for predicting outcomes in the future. Extrapolation can be dangerous.

You're right that you have to look back to build the model, but then it has to be continually assessed for validity, particularly when variables can be subjective, or are affected by environmental factors.

For example, degree choice could be affected by other factors: maybe one year an EE degree was required to hold an ROTC scholarship, and maybe the next year it wasn't. This would drive good candidates who would've been successful in UPT anyway toward certain degrees to satisfy other goals (like paying for college), which makes that variable less useful. Think causal relationships: does an EE degree (or insert any degree) make you a good pilot (or more likely to pass UPT), or do pilots who graduate UPT just happen to have EE degrees?

The paper also talked about race being an important factor, though interviews seem to point towards a bias toward white students, leading to the recommendation to continue expanding diversity/inclusion efforts. So in a similar vein, the environment affects how the variables are included in the models.

3 hours ago, Av8 said:

[image: UPT graduation rate chart, 2011-2018]

Graduation rates for 2011-2018, with a rough overall average of 84%...imagine if our average graduation rate were 94% without degrading the curriculum. Ten percentage points seems like a small change, but it equates to hundreds of extra pilots through the pipeline each year. The quality of the program is another question, but this model is agnostic to program quality. It is a tool for building a new vetting process to improve who we select to go to UPT in the first place. It has shortfalls in its dependence on the 2010-2018 UPT process, but it is definitely a worthwhile discussion to have, and an alternate way of solving the UPT backlog crisis.

 

Just curious, do the 2011-era numbers reflect IFS washouts? I feel like most classes I knew at CBUS finished with only 0-1 washouts, but holy shit, IFS slaughtered a lot of big dreams in their first 3 months of the Air Force.


7 hours ago, ViperMan said:

Survivorship bias doesn't have anything to do with the criteria being used in an evaluation - it has to do with the "subset" of data points included in the analysis. See the small section about "missing bullet holes" in the wiki: https://en.wikipedia.org/wiki/Survivorship_bias. It's an interesting and counter-intuitive discussion about how our intuition works and how easily our "reasoning" can be led astray by invisible and incorrect assumptions.

In that situation, the mistake the military made was to only look at bombers that returned from combat - not bombers that didn't make it back (i.e. the ones that were shot down). That led them to draw wildly wrong conclusions about where to armor up the bomber fleet. By way of analogy, this study includes UPT graduates (bombers that "make it back") and UPT washouts (bombers that "don't make it back") - it doesn't include intel school washouts and/or AFIT graduates because that isn't going to tell you anything about graduating from UPT. It didn't make sense to include data where P-38s were or weren't getting shot up because it was a study focused on bombers.
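
If the wiki write-up isn't convincing, here's a minimal simulation sketch of the effect (all numbers invented for illustration): holes are spread evenly across the fleet, but counting only the survivors makes the lethal sections look clean.

```python
import random

random.seed(1)

# Hypothetical per-hit survival odds by aircraft section (invented for illustration).
survival_if_hit = {"engine": 0.3, "cockpit": 0.4, "fuselage": 0.8, "wings": 0.9}
sections = list(survival_if_hit)

true_hits = {s: 0 for s in sections}      # holes across the whole fleet
observed_hits = {s: 0 for s in sections}  # holes counted on returning aircraft only

for _ in range(10_000):
    hits = [random.choice(sections) for _ in range(3)]  # each sortie takes 3 hits, placed uniformly
    survived = all(random.random() < survival_if_hit[s] for s in hits)
    for s in hits:
        true_hits[s] += 1
        if survived:
            observed_hits[s] += 1

for s in sections:
    print(f"{s:8s} true share of hits: {true_hits[s] / sum(true_hits.values()):.2f}   "
          f"share seen on survivors: {observed_hits[s] / sum(observed_hits.values()):.2f}")
# Engines and cockpits take just as many hits, but the survivors show few holes
# there - a naive analyst armors the fuselage and wings, which is exactly backwards.
```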

It's not survivorship bias; you're advocating for using more dimensions of data - which is fine.

A few things. First, any prediction that is going to be made will, by definition, be "backwards looking," since there's no such thing as future data. And while there may well be better predictor variables out there, the difficulty will be capturing them in a consistent and reliable way across a large population distributed across multiple communities and multiple time spans - not an easy challenge. Maybe if we could somehow capture those students who used to "bullseye womprats back on Tatooine" we could enhance our process...it's challenging to get to that level of fidelity, though.

Already, the fact that > 85% of UPT candidates make it through provides a high level of confidence that UPT selection criteria are pretty good - squeezing out the last few percent becomes increasingly hard in any endeavor. Any average high school varsity basketball player is in the top 1% of all basketball players on earth. Though we all know there is an enormous difference between that kid and Michael Jordan...

And finally, this is not like saying women can't be pilots. No scientific researcher looking at that data, and at how people were selected for pilot training back in the 80s, would ever draw that conclusion. I get your point about the insight gained being limited by the data, but then so is everything else, because we don't have perfect measurement for anything. In any case, the data set used in this study included women.

Correct. Though I would say the model "includes" the unsuccessful events in order to learn from them. Not emphasizes.

So is your suggestion to include people not selected for UPT and then measure how they do in UPT? Or is it just to lump random people who didn't go into the study? I'd pay to see the first executed. If you're suggesting the second, then I think all that study will conclude is that being selected for UPT is the most important data point in determining who graduates from UPT - not exactly ground-breaking research.

The point is that a study like this is not the same as a vaccine trial. You are already selecting from a group that self-selected, and there is nothing you can do as the researcher to observe the outcome you want to examine (UPT graduation) in a group of people who don't want to be military pilots.

Not gonna argue, you make some valid points and we could go back and forth. I’m just saying they are looking at our current selection criteria and seeing what’s best, not anything outside of it.


Splash: Good on you for giving it a shot. Keep your chin up--your UPT performance doesn't define you as a person.
From what I've seen in the MAF, the recent UPT grads are just as good or as bad as the older ones. I've flown with copilots from Altus that blew me away with strong GK/procedures/flying, and I've flown with copilots that were, well, copilots. What's more frustrating to me than the UPT syllabus is that kids graduate UPT, sit for a while, go to Altus, PCS to base X, sit some more while waiting for SERE and water survival, then finally touch a plane again after 3, maybe 4 months. That's a long time to sit; young pilots' hand-flying skills are very perishable.
I have yet to be shocked by a newly minted pilot's (in)abilities after UPT. Does anyone out in the operational squadrons have similar experiences, or the opposite?

As of right now that waiting game isn’t happening. The MAF FTUs are taking UPT grads within a month of graduation.

This was not the case when I went through ~7 years ago. I sat for 6 months between UPT grad and PIQ start.


Sent from my iPhone using Tapatalk

7 hours ago, MCO said:

Not gonna argue, you make some valid points and we could go back and forth. I’m just saying they are looking at our current selection criteria and seeing what’s best, not anything outside of it.

Yep. What this study really says is this: "Hey Air Force, if you adjust the relative weights of the selection criteria you're already using, you could do about 9% better in choosing your pilot candidates. That's a decent improvement."

One thing I noticed, though, is that neither the model nor the data was published. Both would be very interesting to play with and see. They talk a lot about how they used a decision-tree model (decision trees tend to be more explainable), which makes it easier for the Bobs to know what's going on inside the black box.
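
For anyone who hasn't seen why decision trees are considered explainable: the whole fitted model can be dumped as plain if/then rules. A minimal sketch on made-up data and feature names, since the study's model and variables weren't released:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Entirely synthetic stand-in data - the study's real variables weren't published.
# Columns: AFOQT pilot score, prior flight hours, aviation degree flag (0/1).
X = rng.random((500, 3)) * [99, 300, 1]
X[:, 2] = X[:, 2].round()
# Fake outcome loosely driven by the features, just so the tree finds structure.
y = ((X[:, 0] > 40) & (X[:, 1] + 100 * X[:, 2] > 80)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The fitted model prints as nested if/then rules - nothing hidden in a black box.
print(export_text(tree, feature_names=["afoqt_pilot", "flight_hours", "aviation_degree"]))
```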


33 minutes ago, the g-man said:


As of right now that waiting game isn’t happening. The MAF FTUs are taking UPT grads within a month of graduation.

This was not the case when I went through ~7 years ago. I sat for 6 months between UPT grad and PIQ start.


Sent from my iPhone using Tapatalk

The slow roll I’m talking about is after PIQ, not UPT. Going UPT direct to any FTU probably makes a big difference, but all of the new co’s showing up at my base have had months with nothing more than their cockpit poster. If that.

6 months between UPT and PIQ…good night…


12 hours ago, jazzdude said:

You're missing the point I'm making: a descriptive model is not the same as a predictive model. Just because you have a model that looks back to a point in time and can model the outcomes at that time with high success does not mean that model will be useful for predicting outcomes in the future. Extrapolation can be dangerous.

You're right that you have to look back to build the model, but then it has to be continually assessed for validity, particularly when variables can be subjective, or are affected by environmental factors.

For example, degree choice could be affected by other factors: maybe one year an EE degree was required to hold an ROTC scholarship, and maybe the next year it wasn't. This would drive good candidates who would've been successful in UPT anyway toward certain degrees to satisfy other goals (like paying for college), which makes that variable less useful. Think causal relationships: does an EE degree (or insert any degree) make you a good pilot (or more likely to pass UPT), or do pilots who graduate UPT just happen to have EE degrees?

The paper also talked about race being an important factor, though interviews seem to point towards a bias toward white students, leading to the recommendation to continue expanding diversity/inclusion efforts. So in a similar vein, the environment affects how the variables are included in the models.

Ok, you made a strong point and focused heavily on certain types of data being excluded as a problem - but I didn't get much in the way of descriptive vs. predictive modeling.

In any case, descriptive models/analysis don't exclude data from a data set - predictive models/analysis do exclude certain data from the model. In a typical case (not sure what the specific split was in this study), the data set is split 70/30 into a training set and a test set. The model created using the training data is then used on the test data (not present during training) to predict a certain variable (outcome) - in this case, whether or not someone graduated from UPT. So this study is certainly using predictive modeling techniques.

To your point about other factors limiting certain variables' value: if there is data that can be encoded as true/false, yes/no, or 1/0, then machine learning techniques are flexible enough to account for it. If the data isn't present, in many cases it'll be a wash in the aggregate. But to answer your point directly, having a degree doesn't make you a good pilot, but having a degree is an indicator that you are more likely to graduate from UPT. And further, the more difficult the degree, the higher the likelihood you'll graduate. Though to be extremely clear, this is not shown by the data available, since every USAF pilot has a degree - it's not variable among pilots - but it is well understood to be generally true.
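
Roughly what that 70/30 workflow looks like in code, for the curious - a sketch on synthetic records with hypothetical feature names, since the study's data set wasn't released:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for candidate records (hypothetical features).
X = np.column_stack([
    rng.integers(1, 100, n),   # AFOQT pilot percentile
    rng.integers(0, 400, n),   # prior flight hours
    rng.integers(0, 2, n),     # aviation/engineering degree flag
])
# Fake outcome with an 85% graduation base rate, loosely tied to the features.
score = 0.02 * X[:, 0] + 0.004 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.8, n)
y = (score > np.quantile(score, 0.15)).astype(int)

# The 70/30 split described above: fit on the training set...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

# ...then predict the outcome for held-out candidates the model never saw.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```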




Ok, you made a strong point and focused heavily on certain types of data being excluded as a problem - but I didn't get much in the way of descriptive vs. predictive modeling.
In any case, descriptive models/analysis don't exclude data from a data set - predictive models/analysis do exclude certain data from the model. In a typical case (not sure what the specific split was in this study), the data set is split 70/30 into a training set and a test set. The model created using the training data is then used on the test data (not present during training) to predict a certain variable (outcome) - in this case, whether or not someone graduated from UPT. So this study is certainly using predictive modeling techniques.
To your point about other factors limiting certain variables' value: if there is data that can be encoded as true/false, yes/no, or 1/0, then machine learning techniques are flexible enough to account for it. If the data isn't present, in many cases it'll be a wash in the aggregate. But to answer your point directly, having a degree doesn't make you a good pilot, but having a degree is an indicator that you are more likely to graduate from UPT. And further, the more difficult the degree, the higher the likelihood you'll graduate. Though to be extremely clear, this is not shown by the data available, since every USAF pilot has a degree - it's not variable among pilots - but it is well understood to be generally true.


A few points:

All models throw out data - the more variables you look at, the more complex the model gets, and more complexity doesn't necessarily mean better.

Saying machine learning can account for environmental factors is a hand wave that ignores the problem, especially if it's a decision tree. Models are only as good as the information put into them, and missing information doesn't always come out in the wash, especially if a significant unknown factor is left out of the model. Start moving into neural nets and larger data sets, and then yeah, it might pick up on new correlations, but again, it's limited to the data it can see.

The paper doesn't say harder degrees increase your likelihood of graduating UPT; it says if you have an aviation-related degree, you're more likely to graduate, followed by engineering degrees, then all other degrees. Nothing about strength or difficulty of degree. At face value, I take that as: people who are interested in aviation do better in pilot training.
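
A toy sketch of the point above about missing information (invented relationship, not the study's data): leave out a factor that actually drives the outcome, and the model's accuracy ceiling drops with it - the omission doesn't come out in the wash.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 5000

x_seen = rng.random(n)    # a factor we measure
x_hidden = rng.random(n)  # a significant factor we never collect
y = ((0.4 * x_seen + 0.6 * x_hidden) > 0.5).astype(int)  # outcome driven by both

for label, X in [("seen factor only", x_seen.reshape(-1, 1)),
                 ("both factors    ", np.column_stack([x_seen, x_hidden]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    acc = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{label}: held-out accuracy {acc:.2f}")
# The model missing the hidden factor tops out around ~0.67 no matter how it's
# tuned; with the factor included, the same tree reaches roughly ~0.95.
```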

5 hours ago, jazzdude said:

The paper doesn't say harder degrees increase your likelihood of graduating UPT; it says if you have an aviation-related degree, you're more likely to graduate, followed by engineering degrees, then all other degrees. Nothing about strength or difficulty of degree. At face value, I take that as: people who are interested in aviation do better in pilot training.

The issue at hand is not with the model itself; it is a sound predictive model. The old adage goes, "all models are wrong, some are useful." Obviously there are factors left out, but a 10% increase in throughput equates to ~1000 more pilots produced by SUPT from 2011-2018...imagine if we had produced that many more pilots without spending an extra dime or degrading the curriculum. We would now have more guys to fill white jet roles, more Capts in squadrons, and much healthier manning. It is a simple solution, but it is also a start at thinking differently about the problem: instead of shortening timelines and reducing flight hours, maybe we can think outside the box and come up with a different approach where it all begins...the selection to attend UPT.

@ViperMan is correct that we need to see the actual analytic results from the study to start coming to conclusions about which things to look for in an applicant, and I'm sure that can be obtained. However, this is an initial look at the predictive model and its power to help solve the issue, not a descriptive model aimed at analyzing past performance (even though we can gain insights from it). We are in a destructive cycle trying to produce more pilots at the same standard with less, and every time someone is unable to make it through the UPT process, the problem is compounded.


5 hours ago, Av8 said:

We would now have more guys to fill white jet roles, more Capts in squadrons, and much healthier manning. It is a simple solution, but it is also a start at thinking differently about the problem: instead of shortening timelines and reducing flight hours, maybe we can think outside the box and come up with a different approach where it all begins...the selection to attend UPT.

I agree it'd be great to have more pilots, and if we could get to a 94% graduation rate, that'd be awesome for us and the taxpayers. But we're already at ~85%. The question I would ask is: why is there a need to change the approach to selecting those who attend UPT? The only reason I can think of is that it's not currently working - but it clearly is, by any actual metric. So it must be something else.

The average US high school graduation rate is about the same (~88%). The average 4-year college graduation rate is ~33%, rising to ~60% after 6 years. Was there a pilot shortage in the 60s, 70s, and 80s? I don't remember and didn't look it up. The bottom line, IMO, is that if the USAF really wants more pilots, it needs to get serious and open up another UPT base.


11 hours ago, jazzdude said:

A few points:

All models throw out data - the more variables you look at, the more complex the model gets, and more complexity doesn't necessarily mean better.

Saying machine learning can account for environmental factors is a hand wave that ignores the problem, especially if it's a decision tree. Models are only as good as the information put into them, and missing information doesn't always come out in the wash, especially if a significant unknown factor is left out of the model. Start moving into neural nets and larger data sets, and then yeah, it might pick up on new correlations, but again, it's limited to the data it can see.

The paper doesn't say harder degrees increase your likelihood of graduating UPT; it says if you have an aviation-related degree, you're more likely to graduate, followed by engineering degrees, then all other degrees. Nothing about strength or difficulty of degree. At face value, I take that as: people who are interested in aviation do better in pilot training.

Nor does it mean worse. There is information that matters, and there is information that doesn't. Models that contain information that doesn't matter learn things that don't matter (i.e. are false). All things being equal, models built on a greater share of information that matters are better than models diluted with information that doesn't.

Hence my comment that the model as presented contains information that matters and that can be collected uniformly. I just don't think there's much else that matters left to collect on pilot candidates - and a model that gets to a 94% predictive value agrees.

RE: invisible factors:

What would appear, however, are large, unexplained - and inexplicable - deviations from the model. Those deviations (outliers) would lead whoever is using the model to question what the hell is going on. No such deviation shows up in this model, as it is able to predict with 94% accuracy who will graduate. Models that fail to account for latent (hidden) variables - what you're addressing - don't approach 94% accuracy.


The slow roll I’m talking about is after PIQ, not UPT. Going UPT direct to any FTU probably makes a big difference, but all of the new co’s showing up at my base have had months with nothing more than their cockpit poster. If that.
6 months between UPT and PIQ…good night…


Enter FTU-Next! That’s been the poster for all of this…not just a cockpit poster anymore! Can’t find the money for a decent F-16 or C-17 model, but still…also keep using crappy Lockheed VR sims, but whatever. Progress today is better than success tomorrow.

~Bendy


Sent from my iPad using Baseops Network mobile app

On 7/22/2021 at 7:28 AM, pawnman said:

Makes it a lot harder to fail them out of Phase III. Turns it from a commander's decision and a handshake into an FEB.

Historically, only 1.5% of studs wash out of T-1s and 0.9% out of T-38s, so we are talking literal handfuls, maybe, of studs getting through who otherwise might not. Except the T-6 syllabus now has more training requirements and 50% more flying hours, raising Phase II attrition from 9% to 17%.
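
Chaining those phase attrition rates together shows the scale (the even T-1/T-38 split is my assumption for illustration):

```python
# Chain the quoted phase attrition rates into an overall graduation rate.
t6_wash_old, t6_wash_new = 0.09, 0.17   # Phase II attrition, old vs. 2.5 syllabus
t1_wash, t38_wash = 0.015, 0.009        # historical Phase III attrition rates

# Assume (hypothetically) a class splits evenly between T-1s and T-38s after T-6s.
phase3_wash = (t1_wash + t38_wash) / 2

for label, t6_wash in [("old syllabus", t6_wash_old), ("2.5 syllabus", t6_wash_new)]:
    grad_rate = (1 - t6_wash) * (1 - phase3_wash)
    print(f"{label}: {grad_rate:.1%} graduate")  # ~89.9% vs. ~82.0%
# Phase III washes out roughly 1 stud per 100 entrants; the T-6 change dominates.
```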


On 7/26/2021 at 10:33 PM, BashiChuni said:

What’s the current pass rate thru UPT? Seems like in my neck of the woods, you have a pulse, you graduate

17.7% wash out of the T-6 2.5 syllabus. At KEND there has been a steady flow of CRs for months, so I don’t doubt the number


13 minutes ago, Johntsunami said:

17.7% wash out of the T-6 2.5 syllabus. At KEND there has been a steady flow of CRs for months, so I don’t doubt the number

Is this a result of the new syllabus, or the change in IFS going from a screening program to a training program? Or both?


49 minutes ago, Johntsunami said:

Why? The 2.5 studs should be a better product since they show up to T-1 and T-38 with 30+ hours of additional air under their asses compared to 2.0

The current 2.5 studs at RND don’t show up to the T-1…they get some academics and sims/VR and then it’s off to the FTUs.


Why? The 2.5 studs should be a better product since they show up to T-1 and T-38 with 30+ hours of additional air under their asses compared to 2.0

The 30 additional hours are compared to a reduced baseline that puts them right around the number of T-6 hours UPT studs should have been getting anyway…and then the reduced flying in Phase III means…they’re way low.

9 hours ago, Johntsunami said:

17.7% wash out of the T-6 2.5 syllabus. At KEND there has been a steady flow of CRs for months, so I don’t doubt the number

That current number is heavily influenced by a higher-than-normal DOR rate...so take with a grain of salt the idea that the syllabus is flushing out weak swimmers. As far as why the DOR rate is higher, that discussion probably belongs in the "What's Wrong..." thread.

