LET Reviewer: ASSESSMENT OF STUDENT LEARNING

ASSESSMENT OF STUDENT LEARNING

• Assessment of learning focuses on the development and utilization
of assessment tools to improve the teaching-learning process.

• Measurement refers to the quantitative aspect of evaluation; it
involves outcomes that can be quantified statistically.

• Measurement is also defined as the process of determining and
differentiating information about the attributes or characteristics
of things.

• Evaluation is the qualitative aspect of determining the outcomes of
learning and it involves value judgment.

• Testing is a method used to measure the level of achievement or
performance of the learners.

• A test consists of questions, exercises, or other devices for measuring
the outcomes of learning.

• The three CLASSIFICATIONS OF TESTS are according to manner of
response, according to method of preparation, and according to the
nature of answer.

• Objective tests are tests that have definite answers and therefore
are not subject to personal bias.

• Teacher-made tests or educational tests are constructed by the
teachers based on the contents of different subjects taught.

• Diagnostic tests are used to measure a student’s strengths and
weaknesses, usually to identify deficiencies in skills or performance.

• Formative testing is done to monitor students’ attainment of the
instructional objectives.

• Summative testing is done at the conclusion of instruction and
measures the extent to which students have attained the desired
outcomes.

• A standardized test is already valid, reliable, and objective; it is a
test for which the contents have been selected and for which norms or
standards have been established.

• Standards or norms are the goals to be achieved, expressed in terms
of the average performances of the population tested.

• Criterion-referenced measure is a measuring device with a
predetermined level of success or standard on the part of the test-takers.

• Norm-referenced measure is a test that is scored on the basis of the
norm or standard level of accomplishment by the whole group taking
the tests.

• The TYPES OF ASSESSMENT are Placement Assessment, Diagnostic
Assessment, Formative Assessment, and Summative Assessment.

• Placement Assessment is concerned with the entry performance of
the student, where its purpose is to determine the prerequisite
skills, degree of mastery of the course objectives and the best mode
of learning.

• Diagnostic assessment is a type of assessment given before
instruction where it aims to identify the strengths and weaknesses
of the students regarding the topics to be discussed.

• Formative assessment is a type of assessment used to monitor the
learning progress of the students during or after instruction.

• Summative assessment is a type of assessment usually given at the
end of a course or unit.

• The MODES OF ASSESSMENT are Traditional Assessment,
Performance Assessment, and Portfolio Assessment.

• Traditional assessment is an assessment in which students typically
select an answer or recall information to complete the assessment.

• Performance assessment is an assessment in which students are
asked to perform real-world tasks that demonstrate meaningful
application of essential knowledge and skills.

• Portfolio assessment is based on the assumption that assessment is
dynamic; it documents a student's growth through a purposeful
collection of work over time.

• The most reliable tool for seeing the development in a student’s
ability to write is a portfolio assessment.

• The KEYS TO EFFECTIVE TESTING are the Objectives, Instruction,
Assessment, and Evaluation.

• Objectives are the specific statements of the aims of instruction;
they should express what the students should be able to do or
know as a result of taking the course.

• Instruction consists of all the elements of the curriculum designed
to teach the subject, including the lesson plans, study guides, and
reading and homework assignments.

• Assessment is the process of gathering, describing, or quantifying
information about the performance of the learner and testing
components of the subject.

• The factors to consider when constructing GOOD TEST ITEMS are
validity, reliability, administrability, scorability, appropriateness,
adequacy, fairness, and objectivity.

• Validity refers to the degree to which a test measures what it is
intended to measure.

• To test the validity of a test, it is pretested in order to
determine if it really measures what it intends to measure or what
it purports to measure.

• Reliability refers to the consistency of scores obtained by the same
person when retested using the same instrument or one that is
parallel to it.

• The test of reliability is the consistency of the results when the test
is administered to different groups of individuals with similar
characteristics in different places at different times.

• Scorability states that the test should be easy to score, directions
for scoring should be clear, and the test developer should provide
the answer sheet and the answer key.

• Appropriateness mandates that the test items the teacher
constructs must assess the exact performances called for in the
learning objectives.

• Adequacy states that the test should contain a wide sampling of
items to determine the educational outcomes or abilities so that
the resulting scores are representative of the total performance in
the areas measured.

• Fairness mandates that the test should not be biased against any of
the examinees.

• Evaluation is the process of examining the performance of students
and comparing and judging its quality.

• The TYPES OF VALIDITY are Content Validity, Criterion-Related
Validity, and Construct Validity.

• Content validity is a validation that refers to the relationship
between a test and instructional objectives and it establishes the
content so that the test measures what it is supposed to measure.

• Criterion-Related Validity is a type of validation that refers to the
extent to which scores from a test relate to theoretically similar
measures.

• The two types of CRITERION-RELATED VALIDITY are Concurrent Validity
and Predictive Validity.

• Construct validity is a type of validation that measures the extent to
which a test measures a hypothetical and unobservable variable or
quality, such as intelligence, math achievement, performance
anxiety, etc.

• Predictive validity is a type of validation that measures the extent to
which a person's current test results can be used to estimate accurately
what that person's performance on some other criterion, such as a later
test score, will be.

• Concurrent validity is a type of validation that requires the correlation
of the predictor or concurrent measure with the criterion measure,
which can be used to determine whether a test is useful as a
predictor or as a substitute measure.

• Objectivity is the degree to which personal bias is eliminated in the
scoring of the answers.

• Nominal scales classify objects or events by assigning numbers to
them, which are arbitrary and imply no quantification, but the
categories must be mutually exclusive and exhaustive.

• Ordinal scales classify and assign rank order.

• Interval scales, also known as equal-interval or equal-unit scales, are
needed to be able to add or subtract scores.

• Ratio scale is a scale where zero is not arbitrary; a score of zero
indicates the absence of what is being measured.

• Norm-referenced interpretation is where an individual’s score is
interpreted by comparing it to the scores of a defined group, often
called the normative group.
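
A minimal Python sketch of norm-referenced interpretation, assuming a hypothetical normative group of scores; it reports the percentage of the norm group that a given raw score exceeds (a simple percentile rank).

    # Norm-referenced interpretation: compare one score to a normative group.
    # The scores below are hypothetical, for illustration only.
    norm_group = [45, 50, 52, 55, 58, 60, 62, 65, 70, 75]

    def percentile_rank(score, group):
        """Percentage of the norm group scoring below the given score."""
        below = sum(1 for s in group if s < score)
        return 100 * below / len(group)

    print(percentile_rank(62, norm_group))  # 60.0 -> the score beats 60% of the norm group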

• Criterion-Referenced Interpretation means referencing an
individual’s performance to some criterion that is a defined
performance level.

• The stages in TEST CONSTRUCTION are Planning the test, Trying Out
the test, Establishing Test Validity, Establishing the Test Reliability,
and Interpreting the Test Score.

• Frequency distribution is a technique for describing a set of test
scores where the possible score values and the number of persons
who achieved each score are listed.
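
A minimal sketch of a frequency distribution in Python, using hypothetical test scores; each score value is listed with the number of students who obtained it.

    from collections import Counter

    # Hypothetical test scores for illustration only.
    scores = [10, 12, 12, 13, 15, 15, 15, 18, 20, 20]

    # Count how many students achieved each score value.
    freq = Counter(scores)
    for value, count in sorted(freq.items()):
        print(value, count)   # e.g. 15 occurs 3 times, 12 and 20 occur twice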

• Measures of central tendency are computed to know where on the
scale of measurement a distribution is located.

• Measures of dispersion are used to know how the scores are dispersed
in the distribution.

• The three commonly used MEASURES OF CENTRAL TENDENCY are
the mean, median and mode.

• The mean of a set of scores is the arithmetic mean and is found by
summing the scores and dividing the sum by the number of
scores.

• Median is the point that divides the distribution in half; that is,
half of the scores fall above the median and half of the scores fall
below it.

• Mode is the most frequently occurring score in the distribution.
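
A short Python sketch computing the three measures of central tendency for a small set of hypothetical scores.

    import statistics

    scores = [70, 75, 80, 80, 85, 90, 95]  # hypothetical scores

    print(statistics.mean(scores))    # 82.14... (sum of scores divided by N)
    print(statistics.median(scores))  # 80 (middle score of the ordered list)
    print(statistics.mode(scores))    # 80 (most frequently occurring score)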

• Range is the difference between the highest score and the lowest
score.

• The variance measures how widely the scores in the distribution
are spread about the mean.

• Variance is the average squared difference between the scores and
the mean.

• The standard deviation also indicates how spread out the scores are,
but unlike the variance it is expressed in the same units as the
original scores.
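
A short Python sketch of these measures of dispersion for the same kind of hypothetical scores; the population formulas are used here, which is one common classroom convention.

    import statistics

    scores = [70, 75, 80, 80, 85, 90, 95]  # hypothetical scores

    data_range = max(scores) - min(scores)   # highest minus lowest score
    variance = statistics.pvariance(scores)  # average squared deviation from the mean
    std_dev = statistics.pstdev(scores)      # square root of the variance, in score units

    print(data_range, variance, std_dev)     # 25, ~63.27, ~7.95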

• A graph of a distribution of test scores is better understood than a
frequency distribution or a table of numbers because the general
shape of the distribution is clear from the graph.

• A teacher must use an Essay type of test to measure the student's
ability to organize ideas.

• NSAT and NEAT results are interpreted against a set mastery level,
which means that the tests fall under criterion-referenced tests
because they describe the students' mastery of the objectives.

• The first step in planning an achievement test is to define the
instructional objective.

• Skewed score distribution means the scores are concentrated more
at one end of the distribution than at the other.

• Normal distribution means that the mean, median, and mode are
equal.
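
A brief Python illustration of the two preceding points, using small hypothetical score sets: in the roughly symmetric set the mean and median coincide, while in the skewed set the extreme score pulls the mean away from the median.

    import statistics

    symmetric = [70, 75, 80, 85, 90]   # hypothetical, roughly symmetric scores
    skewed = [70, 72, 74, 76, 98]      # hypothetical, one extreme high score

    print(statistics.mean(symmetric), statistics.median(symmetric))  # 80, 80
    print(statistics.mean(skewed), statistics.median(skewed))        # 78, 74 -> mean pulled upward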

• When the computed value of r for Math and Science is 0.90, it implies
that the higher the scores in Math, the higher the scores in Science,
because r = 0.90 means a high positive correlation.
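
A minimal Python sketch of the Pearson correlation coefficient r for hypothetical paired Math and Science scores; a value near +1 means that high scores in one subject go with high scores in the other.

    import math

    # Hypothetical paired scores for illustration only.
    math_scores = [60, 65, 70, 80, 90]
    science_scores = [62, 66, 72, 78, 92]

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    print(round(pearson_r(math_scores, science_scores), 2))  # ~0.99 -> high positive correlation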

• An objective that is in the highest level in Bloom’s taxonomy is rating
three different methods of controlling tree growth because it deals
with evaluation.

• Inferential statistics is the type of statistics that draws conclusions
about a population based on the sample being studied.

• Generosity error is the error teachers commit when they tend to
overrate the achievement of students identified by aptitude
tests as gifted because they expect achievement and giftedness to
go together.

• Portfolio assessment measures the students’ growth and
development.

• Formative testing is the test most fit for mastery learning because it
is done after or during a discussion where the feedback can be used
to determine whether the students have a mastery of the subject
matter.

• A characteristic of an imperfect type of matching set is that an item
may have no answer at all.

• Determining the effectiveness of distracters is included in an item
analysis.

• Discrimination index is the difference between the proportion of
high-performing students who got the item right and the proportion of
low-performing students who got the item right.

• A positive discrimination index means that more students from the
upper group got the item right.

• A negative discrimination index takes place when the proportion of
students who got an item right in the low-performing group is
greater than in the upper-performing group.

• Zero discrimination happens when the proportion of students who got
an item right in the upper-performing group and the low-performing
group is equal.
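
A minimal Python sketch of the discrimination index, assuming hypothetical item responses (1 = got the item right, 0 = got it wrong) for an upper and a lower group of students.

    # Hypothetical responses to one item: 1 = correct, 0 = incorrect.
    upper_group = [1, 1, 1, 1, 0]   # high-performing students
    lower_group = [1, 0, 0, 0, 0]   # low-performing students

    p_upper = sum(upper_group) / len(upper_group)   # proportion correct in upper group (0.8)
    p_lower = sum(lower_group) / len(lower_group)   # proportion correct in lower group (0.2)

    discrimination_index = p_upper - p_lower
    print(discrimination_index)  # 0.6 -> positive: more upper-group students got the item right
    # A negative value would mean the lower group did better; 0 means no discrimination.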

• When points in the scattergram are spread evenly in all directions,
this means that there is no correlation between the two variables.

• A norm-referenced statement is comparing the performance of a
certain student with the performance of other student/s.

• Content is a type of validity that is needed for a test on course
objectives and scopes.

• When there are extreme scores, the mean will not be a very reliable
measure of central tendency.

• The sum of all the scores in a distribution always equals the mean
times N because the mean is the sum of all the scores divided by the
number of scores (N): Mean = Summation of Scores / N, so
Summation of Scores = Mean × N.
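
A quick numeric check of this identity, using hypothetical scores.

    scores = [70, 75, 80, 85, 90]           # hypothetical scores, N = 5
    mean = sum(scores) / len(scores)        # 80
    print(sum(scores), mean * len(scores))  # 400 400 -> sum of scores equals mean times N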

• A Z-value can be used to compare the performance of students
because it tells the number of standard deviations a raw score lies
above or below the mean; the higher the value of the Z-score, the
better the performance of a certain student is.
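
A short Python sketch of comparing two students with Z-scores, assuming hypothetical raw scores, class means, and standard deviations for two different tests.

    def z_score(raw, mean, sd):
        """Number of standard deviations a raw score lies above (+) or below (-) the mean."""
        return (raw - mean) / sd

    student_a = z_score(85, mean=80, sd=5)    # 1.0 -> one SD above the class mean on Test 1
    student_b = z_score(88, mean=82, sd=12)   # 0.5 -> half an SD above the class mean on Test 2

    print(student_a, student_b)  # Student A performed better relative to the class, despite the lower raw score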

• Median is the measure of position that is appropriate when the
distribution is skewed.

• The Analysis of Variance (ANOVA), utilizing the F-test, is the
appropriate significance test to run between three or more means.
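
A minimal sketch of a one-way ANOVA comparing three hypothetical groups of scores, using scipy.stats.f_oneway and assuming SciPy is available.

    from scipy.stats import f_oneway

    # Hypothetical score groups, e.g. three sections taking the same test.
    section_a = [78, 82, 85, 80, 79]
    section_b = [70, 72, 75, 68, 74]
    section_c = [88, 90, 85, 92, 87]

    f_statistic, p_value = f_oneway(section_a, section_b, section_c)
    print(f_statistic, p_value)  # a small p-value suggests at least one group mean differs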

• For the standard deviation, the higher its value, the farther the
scores are, on average, from the mean value, whereas the smaller its
value, the closer the scores are, on average, to the mean value.

• When the value of standard deviation is small, the scores are
concentrated around the mean value because the smaller the value
of the standard deviation the more concentrated the scores are to
the mean value.
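
A small Python illustration of this point, with two hypothetical classes that have the same mean but different spreads.

    import statistics

    class_a = [78, 79, 80, 81, 82]   # hypothetical scores bunched around the mean
    class_b = [60, 70, 80, 90, 100]  # hypothetical scores spread far from the mean

    print(statistics.mean(class_a), statistics.pstdev(class_a))  # 80, ~1.41
    print(statistics.mean(class_b), statistics.pstdev(class_b))  # 80, ~14.14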

• When the distribution is skewed the most appropriate measure of
central tendency is Median.

• In the parlance of test construction, TOS means Table of
Specifications.

• Range is a measure of variation that is easily affected by the
extreme scores.

• Mode is the measure of central tendency that can be determined
by mere inspection because mode can be identified by just counting
the score/s that occurred the most in a distribution.

• A clear description of each criterion to serve as a standard, very
clear descriptions of the performance levels, a rating scale, and
mastery levels of achievement are considerations that are important
in developing a SCORING RUBRIC.

• A rubric is developmental.

• Performance-based assessment emphasizes process and product.

• Kohlberg and other researchers used moral dilemmas to measure
the awareness of values.

• PROJECTIVE PERSONALITY TEST includes Sentence Completion test,
Word Association test, and Thematic Apperception Test.

• An anecdotal report is a note written by the teacher regarding
incidents in the classroom that might need special attention in the
future.

• One of the strengths of an autobiography as a technique for
personality appraisal is that it makes the presentation of intimate
experiences possible.

• Carl Rogers is considered the main proponent of Non-Directive
counseling.

• Sharing the secrets of a counselee with other members of the
faculty is in violation of confidentiality.

• Counselors can break the confidentiality rule in cases of planned
suicide or planned hurting/killing of somebody.

• Sinforoso Padilla is considered the father of counseling in the
Philippines.

• Portfolio is the pre-planned collection of samples of student works,
assessed results and other output produced by the students.

• Assessment is said to be authentic when the teacher gives students
real-life tasks to accomplish.

• The main purpose of a teacher using a standardized test is to engage
in easy scoring.

• Marking on a normative basis follows the normal distribution curve.

• A scoring rubric is important for a self-assessment to be effective.

• The main purpose of administering a pretest and a post-test to
students is to measure gains in learning.

• An assessment activity that is most appropriate to measure the
objective “to explain the meaning of molecular bonding” for the
group with strong interpersonal intelligence is to demonstrate
molecular bonding using students as atoms.

• Emphasis on grades and honors goes with the spirit of “assessment
of learning”.

• The Split-half method and Kuder-Richardson measure the internal
consistency of the test scores of the students.
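
A minimal Python sketch of the Kuder-Richardson formula 20 (KR-20) estimate of internal consistency, assuming a small hypothetical matrix of dichotomously scored items (1 = right, 0 = wrong), one row per student.

    import statistics

    # Hypothetical item responses: rows are students, columns are items.
    responses = [
        [1, 1, 1, 0, 1],
        [1, 1, 0, 0, 1],
        [1, 0, 1, 1, 1],
        [0, 1, 0, 0, 0],
        [1, 1, 1, 1, 1],
    ]

    k = len(responses[0])                     # number of items
    totals = [sum(row) for row in responses]  # each student's total score
    total_var = statistics.pvariance(totals)  # variance of total scores

    # p = proportion answering each item correctly, q = 1 - p
    pq_sum = 0.0
    for item in range(k):
        p = sum(row[item] for row in responses) / len(responses)
        pq_sum += p * (1 - p)

    kr20 = (k / (k - 1)) * (1 - pq_sum / total_var)
    print(round(kr20, 2))  # ~0.6 here; values near 1 indicate high internal consistency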

• Test-retest measures the stability of the test scores.

• The parallel-forms method measures equivalence between two forms of a test.

• The expression “grading on the curve” means the performance of a
certain student compared to the performance of other students in
the group.

• A scoring rubric has criteria and levels of achievement to serve as
standards, has a clear description of performance at each level, and
has a rating scheme.

• When constructing a matching type of test, the options must be
greater in number than the descriptions, the directions must state the
basis of matching, and the descriptions must be in Column A and the
options in Column B.

• An Extended Essay test can effectively measure HOTS (higher-order
thinking skills) cognitive learning objectives.

• An objective test can cover a large sampling of content areas, is
time-consuming to prepare, and has a single or best answer.

• Objective tests measure low-level thinking skills, such as
knowledge, comprehension, and application.
