The 91¶ÌÊÓƵ PA program uses an ongoing iterative process to systematically apply statistical and/or logical techniques to describe, summarize, and analyze data. It allows for evaluation of overall program effectiveness, identification of program strengths and weaknesses, and delineation of plans for modifications and improvement. The process is guided and informed by our mission, goals, and the ARC-PA "C" Standards.
The program has four committees tasked with gathering, analyzing, and triangulating data from multiple data collection tools to help determine cause-and-effect relationships, and then with developing action plans that address the desired modifications in triangulated areas of improvement; faculty and staff implement these plans. The program committees ensure ongoing compliance with the ARC-PA Standards by facilitating and implementing the self-assessment processes outlined below and by verifying that the program achieves its published mission, goals, and outcomes. The committees carry out these aspects of the self-assessment process and report to the program director, who has ultimate oversight of the process and ensures continuous review, improvement, and strengthening of the program's ability to deliver high-quality education.
The 91¶ÌÊÓƵ PA Program's self-assessment process is diagrammed below:
Decisions regarding the types of data collected are informed by the ARC-PA Standards and the program's mission and goals. Raw data (e.g., quantitative/qualitative, direct/indirect) are gathered by program committees throughout all phases of the program according to program-set data collection times. Program-specific Data Collection Tools allow committees to collect evidence of program effectiveness in administrative functions/operations, student didactic and clinical learning, faculty effectiveness and sufficiency, and success in meeting program outcomes. The Data Collection Tools Index used in the self-assessment process links to the tools and defines: what data (quantitative and qualitative) will be collected and how; which committee is responsible for gathering and analyzing the data; the Likert scale assigned to each collection tool; planned benchmarks for strengths and for areas in need of improvement; and the timing of data collection and analysis.
The following benchmarks for the self-assessment were developed using input from both internal and external sources. Internal benchmarking input came from program faculty discussion and from the benchmarks of other programs within the Dumke College of Health Professions. External input came from benchmarking against matched PA programs and programs in the local region. Most programs reported using a 1.0–5.0 Likert scale, with 3.5/5.0 defined as adequate and anything below 3.0/5.0 identified as an area in need of improvement; these benchmarks correspond to 70% and 60%, respectively. The following is a detailed breakdown of the 91¶ÌÊÓƵ PA program benchmarks:
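The scale-to-percentage conversion described above can be sketched as follows (a minimal illustration; the function name is ours, not the program's):

```python
def likert_to_percent(score: float, scale_max: float = 5.0) -> float:
    """Convert a Likert-scale score to a percentage of the scale maximum."""
    return round(score / scale_max * 100, 1)

# The external benchmarks cited above: 3.5/5.0 -> 70%, 3.0/5.0 -> 60%
print(likert_to_percent(3.5))  # 70.0
print(likert_to_percent(3.0))  # 60.0
```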
Quantitative Data Analysis
The program uses a 4-point Likert scale and the standard 4.0 grading scale for quantitative data across all Data Collection Tools. The 91¶ÌÊÓƵ faculty chose the 4.0 Likert scale over a 5.0 scale to eliminate the neutral option, requiring responders to state an opinion. When setting benchmarks, faculty agreed that the program should be held to high standards, so the adequate benchmark was set at 75% (higher than the 70% used by other programs).
Analyzed quantitative data will be categorized into one of four categories:
- Program Strengths: 3.5–4.0
- Adequate Benchmark: 3.0–3.49
- Areas in Need of Monitoring: 2.4–2.99*
- Areas in Need of Improvement: <2.4
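The quantitative thresholds above amount to a simple mapping from an analyzed mean score to a category, which can be sketched as follows (the function and returned labels are illustrative, not the program's):

```python
def categorize_quantitative(mean_score: float) -> str:
    """Map a 4.0-scale mean score to one of the four program categories."""
    if mean_score >= 3.5:
        return "Program Strength"
    if mean_score >= 3.0:
        return "Adequate Benchmark"
    if mean_score >= 2.4:
        return "Area in Need of Monitoring"  # starred: subject to the 3-of-5-year rule
    return "Area in Need of Improvement"

print(categorize_quantitative(3.7))  # Program Strength
print(categorize_quantitative(2.5))  # Area in Need of Monitoring
```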
Qualitative Data Analysis

Analysis of qualitative data consists of coding responses and categorizing them to identify themes, both positive and negative. A theme is established when more than 20% of those surveyed comment on a specific item.
Analyzed qualitative data will be categorized into one of four categories:
- Program Strengths (4): >60% positive themes and <50% negative themes
- Meets Benchmark:
  - Adequate (3): 0–39.9% negative themes
  - Monitor (2)*: 40–49.9% negative themes
- Below Benchmark (1): ≥50% negative themes

Even a single comment indicating a possible violation of professional norms, program policy, university policy, or applicable local laws reaches an actionable level.
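The qualitative ratings above can likewise be sketched in code. Two points are our assumptions rather than stated rules: the Program Strengths check is applied before the negative-theme bands, and a violation comment is represented here by returning the lowest rating (the source says only that it "reaches an actionable level"):

```python
def categorize_qualitative(pct_positive: float, pct_negative: float,
                           has_violation_comment: bool = False) -> int:
    """Map theme percentages to the program's 4..1 qualitative rating."""
    if has_violation_comment:
        # Assumption: a possible violation of norms, policy, or law is
        # treated as the lowest (actionable) rating.
        return 1
    if pct_positive > 60 and pct_negative < 50:
        return 4  # Program Strength
    if pct_negative < 40:
        return 3  # Meets Benchmark - Adequate
    if pct_negative < 50:
        return 2  # Meets Benchmark - Monitor (*)
    return 1      # Below Benchmark

print(categorize_qualitative(70, 20))  # 4
print(categorize_qualitative(30, 45))  # 2
```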
*Any area that requires Monitoring in 3 out of any 5 consecutive years converts to an Area in Need of Improvement.
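The escalation rule in the footnote can be checked with a sliding window, assuming the rule means three Monitor ratings within any five consecutive years (the function and labels are illustrative):

```python
def needs_escalation(yearly_categories: list[str]) -> bool:
    """True if any 5-year window contains 3+ 'Monitor' ratings,
    converting the area to an Area in Need of Improvement."""
    for start in range(max(1, len(yearly_categories) - 4)):
        window = yearly_categories[start:start + 5]
        if window.count("Monitor") >= 3:
            return True
    return False

history = ["Adequate", "Monitor", "Monitor", "Adequate", "Monitor"]
print(needs_escalation(history))  # True
```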