Automated Feedback & Assessment


The findingQED platform automatically measures and assesses the learner's higher-order thinking skills and processes, as well as their knowledge competence.

Resolving each problem scenario to the greatest extent possible requires the learner to employ an array of higher-order thinking skills, such as investigating, fact gathering, analyzing and evaluating, judging and assessing, connecting, deriving inferences, specifying problem solution perspectives, and constructing well-reasoned, fact-based arguments in support of their perspectives.  

Following each problem solving engagement, learners receive automated detailed feedback about their higher-order thinking skills and processes. The objective of the feedback is to assist the learner in independently developing their higher-order thinking skills; that is, to provide insightful instruction that is similar to what an expert instructor would provide if they were advising the learner in real time. This instructive descriptive feedback is provided in 6 different sections associated with each completed scenario, as follows:

Red-line highlighting of any mistakes the learner made in the argument(s) constructed in support of their solution perspective(s). Note that some scenarios have multiple competing perspectives, any one of which could be considered a reasonable solution. Whether there is a single reasonable perspective or several, the platform red-lines all of the arguments the learner made, flagging:

  • omission of necessary facts or inferences
  • inclusion of invalid or unnecessary items
  • poor reasoning / logical sequencing

Explanation of why each red-lined item was a mistake.

Display of the "model argument" (for each of the possible reasonable solution perspectives) that the scenario creator believed could resolve the scenario issue(s) to the greatest extent possible, so that the learner can compare their arguments to that of the scenario creator.

Descriptive summary of the scenario creator's overall approach or approaches to resolve the issue. This summary describes effective problem solving strategies and key aspects and steps to "cracking" the problem and making the necessary connections and inferences. If there are multiple reasonable perspectives, each will be discussed.

Description explaining each required derived inference. In each instance where a scenario requires an inferential leap, this leap is described in this section, breaking down the inferential leap into the combination of explicit "obvious" items, and the rationale for why and how the connection between explicit items leads to the inference or derivation.

Quantitative metrics. The full set of 33 standard metrics as well as any custom scenario-specific metrics are provided to the learner after each scenario engagement for a quantitative gauge and to quickly spotlight specific skills areas in need of improvement. These quantitative metrics are described more fully below in "Assessment Metrics". 

With these 6 sections of feedback, the learner gains a deeper understanding of how to develop and advance their own higher-order thinking skills, independent of any instructor intervention. If and when a live instructor is present, they can also reference the descriptive feedback and quantitative metrics to further assist the learner, as well as the group as a whole.

Standard Assessment Metrics

There are 33 standard higher-order thinking metrics that are automatically tracked for every scenario on the findingQED platform. Analyzing each metric, both individually AND in various combinations, provides deep insight into the higher-order thinking skills of each user and of the group as a whole.

Since these 33 standard metrics are identical across all scenarios, a user's skills advancement can be accurately tracked over time and across progressive scenarios. The metrics also provide statistics for comparing different groups of learners and how those groups perform over time.

Custom Assessment Metrics

In addition to these 33 standard metrics, scenario creators have the opportunity to create their own custom assessment metrics as well. Thus, a scenario creator has the ability to measure achievement on any additional skills and knowledge elements required by the scenario beyond the 33 standard metrics.

For example, the platform categorizes the "skill" of reading a linear graph as a "Derived Evidence Item", because the value must be derived by properly interpreting the graph. However, the platform tracks many other types of "Derived Evidence Items", so if the scenario creator wants to distinguish the skill of reading a graph from other derived items, they can tag a scenario's linear graph data (both the valid and red herring data) with a custom text label such as "Interpreting a Linear Graph". Thereafter, for every engagement of that scenario by any learner, the platform will track and report on the learner's ability to correctly interpret the scenario's linear graphs, in addition to all the other standard and custom metrics. The scenario creator can establish custom metrics as detailed and granular as they desire.

This custom measurement and tracking feature also enables the assessment of the learner's ability to comprehend and apply any knowledge targeted by the scenario creator. For example, if the concept of "density" (e.g. fish per acre in a pond) is an important learning concept, the scenario creator can tag all the density data (both valid and red herring data) in the scenario with a custom text phrase such as "density". Thereafter, for every engagement of that scenario by any learner, the platform will track and report on each learner's ability to correctly comprehend and apply the scenario's density data. As such, the platform not only tracks 33 standard skill metrics and custom specified skill metrics, it also tracks and reports on the learner's proper application of knowledge, so long as that knowledge application has been tagged for tracking by the scenario creator.
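To make the tagging idea concrete, the sketch below models it in Python. This is purely illustrative: the field names, the item structure, and the scoring rule (a valid item should be used, a red herring should be rejected) are assumptions for explanation, not the actual findingQED data model.

```python
# Hypothetical sketch of custom-metric tagging; not the findingQED API.
from collections import defaultdict

# Scenario data items, each marked valid or red herring and tagged
# with the creator's custom labels.
scenario_items = [
    {"id": "g1", "valid": True,  "tags": ["Interpreting a Linear Graph"]},
    {"id": "g2", "valid": False, "tags": ["Interpreting a Linear Graph"]},  # red herring
    {"id": "d1", "valid": True,  "tags": ["density"]},
    {"id": "d2", "valid": False, "tags": ["density"]},  # red herring
]

# Items the learner chose to include in their argument.
learner_selection = {"g1", "d1", "d2"}

def custom_metric_scores(items, selected):
    """Per-tag accuracy: valid items should be selected, red herrings rejected."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        handled_correctly = (item["id"] in selected) == item["valid"]
        for tag in item["tags"]:
            total[tag] += 1
            correct[tag] += handled_correctly
    return {tag: correct[tag] / total[tag] for tag in total}

scores = custom_metric_scores(scenario_items, learner_selection)
# → {"Interpreting a Linear Graph": 1.0, "density": 0.5}
```

Here the learner correctly used the valid graph value and rejected the graph red herring (score 1.0), but included the density red herring alongside the valid density value (score 0.5), which is exactly the kind of tag-level signal a custom metric surfaces.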

Assessment Reports

There are several ways to view and analyze the deep and rich set of assessment data, both online and with downloaded Excel files. Instructors can slice the assessment data by learner or by groups of learners, and further cut it by the standard skill metrics, by custom metrics, and by how learners recommended the problems be solved. This robust reporting provides a powerful and informative view of individual and group achievement, highlighting the areas in need of attention and further work. The reports include:

  • group and learner completion status
  • 33 standard higher-order thinking skill metrics
    • detailed metrics per learner
    • summary statistics per group of learners
  • detailed argument constructions, by actual content and by quantitative statistics
    • per learner
    • per group of learners, or groups of groups of learners
  • reasoning statistics
    • per learner 
    • per group of learners or groups of groups of learners
  • custom skills and knowledge metrics
    • per learner
    • per group of learners, or groups of groups of learners

Patterns of Flawed Thinking Habits 

In addition to diagnosing specific higher-order thinking skill strengths and weaknesses, the assessment data can be mined for patterns among different combinations of metrics. Some of these patterns reveal thinking habits and flaws that are unrelated to higher-order thinking skills but are valuable nonetheless. These discoveries have been "eye opening" for learners and instructors alike.

From the "Deeper Dive" menu just below, explore more about how findingQED develops, exercises, and assesses higher-order thinking skills. Feel free to contact us to learn more about how we can assist you and the learners in your organization.


A Deeper Dive into the Power of findingQED



How it Works


What Scenarios?


Knowledge vs Thinking


Feedback & Assessment