It is sometimes subjective. Expert opinion and team member input do not always lend themselves to a tangible or easily quantified assessment. In these cases, we may end up with a subjective determination of probability.
Process metrics, that is, the data from our organization's processes, can be useful, and these are not necessarily subjective; they can be objective. Aggregating this data makes statistical analysis of the variation possible. If we have been recording key performance metrics over time, we will have data that can be analyzed to understand the range of variation in the output of the process.
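As a minimal sketch of that kind of analysis, consider a process metric recorded over time, such as weekly cycle time; the metric name and the numbers below are illustrative, not real project data.

```python
# Minimal sketch: characterizing the variation of a recorded process metric.
# The metric and the values are hypothetical, for illustration only.
import statistics

# Hypothetical weekly cycle times (days) pulled from historical process records.
cycle_times = [12.0, 14.5, 11.8, 13.2, 15.1, 12.7, 13.9, 14.2]

mean = statistics.mean(cycle_times)
std_dev = statistics.stdev(cycle_times)

# Simple 3-sigma control limits to judge whether a new observation
# falls inside the historical range of variation.
upper_limit = mean + 3 * std_dev
lower_limit = mean - 3 * std_dev

print(f"mean = {mean:.1f} days, std dev = {std_dev:.1f} days")
print(f"control limits: {lower_limit:.1f} to {upper_limit:.1f} days")
```

Once we know the historical range of variation, an individual result outside the control limits is a signal worth investigating rather than a matter of opinion.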
Besides historical process metrics, we can look at process metrics for work underway. For example, consider the testing of the product. We can use metrics such as the defect arrival rate and the severity of the defects discovered to drive decisions; a small sketch of these calculations appears below.
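The sketch below tallies defect arrivals per week and the severity mix from a list of defect records; the records and field names are hypothetical, for illustration only.

```python
# Minimal sketch: defect arrival rate and severity mix for work underway.
# The defect records are hypothetical; the severity labels are illustrative.
from collections import Counter

# Each tuple: (week the defect was reported, severity)
defects = [
    (1, "major"), (1, "minor"), (2, "critical"),
    (2, "major"), (2, "minor"), (3, "major"),
]

arrivals_per_week = Counter(week for week, _ in defects)
severity_mix = Counter(severity for _, severity in defects)

print("defect arrivals per week:", dict(arrivals_per_week))
print("severity mix:", dict(severity_mix))
```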

There are project management measurements we can take as well, beyond the critical path. These include schedule and cost variation derived using techniques such as Earned Value Management (a minimal sketch of these calculations follows the list):
1. Schedule Variance
2. Cost Variance
3. Schedule Performance Index
4. Cost Performance Index
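
The sketch below computes the four measures listed above from planned value, earned value, and actual cost using the standard Earned Value Management formulas; the dollar figures are illustrative, not real project data.

```python
# Minimal sketch of the four Earned Value Management measures listed above.
# The dollar figures are hypothetical, for illustration only.
planned_value = 100_000.0   # PV: budgeted cost of work scheduled to date
earned_value = 85_000.0     # EV: budgeted cost of work actually completed
actual_cost = 95_000.0      # AC: actual cost of the completed work

schedule_variance = earned_value - planned_value           # SV = EV - PV
cost_variance = earned_value - actual_cost                 # CV = EV - AC
schedule_performance_index = earned_value / planned_value  # SPI = EV / PV
cost_performance_index = earned_value / actual_cost        # CPI = EV / AC

print(f"SV  = {schedule_variance:,.0f}  (negative means behind schedule)")
print(f"CV  = {cost_variance:,.0f}  (negative means over budget)")
print(f"SPI = {schedule_performance_index:.2f}  (< 1.0 means behind schedule)")
print(f"CPI = {cost_performance_index:.2f}  (< 1.0 means over budget)")
```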

These project metrics can be recorded over time to allow some measure of prediction for the project; they point to areas of potential risk, and the performance trends can help anticipate a potential failure (risk) and allow for actions to avoid or prevent the situation from becoming an outright failure.

There are other metrics that can be used that are not subjective. After identifying a risk and evaluating its impact on the project, product, and organization, we can develop metrics that help us understand what is truly happening, or the trend in that topic area. For example, perhaps we want to understand the state of a proposed design undergoing Design Verification Testing; there are test metrics that can help us do so rather than waiting until the product is completely tested. We can look at the number of test cases conducted and the types and severity of the failures. If we have accomplished only 10% of the testing and we are already finding significant and severe failures, we can predict that there are likely other latent failures waiting to be discovered. A minimal sketch of that kind of projection follows.
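The sketch below projects latent defects from partial Design Verification Testing results, under the simplifying assumption that the defect density observed so far continues through the remaining tests; the counts are hypothetical, for illustration only.

```python
# Minimal sketch: projecting latent defects from partial test results.
# Assumes the defect density seen so far continues at the same rate
# through the remaining tests. All counts are hypothetical.
tests_planned = 400
tests_executed = 40            # roughly 10% of the planned testing
defects_found = 12
severe_defects_found = 5

defects_per_test = defects_found / tests_executed
tests_remaining = tests_planned - tests_executed

projected_latent_defects = defects_per_test * tests_remaining
severe_fraction = severe_defects_found / defects_found

print(f"projected latent defects: ~{projected_latent_defects:.0f}")
print(f"fraction of defects that are severe so far: {severe_fraction:.0%}")
```

Even a rough projection like this gives us an objective basis for deciding whether the design is ready to proceed or whether the risk warrants action now.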

I hope this helps. We can revisit this discussion over the course of the training to explore how we can move from overly subjective assessments toward more objective ones, if you like.