
Please use this identifier to cite or link to this item: http://ir.ncue.edu.tw/ir/handle/987654321/15193

Title: 檢定與分析多向性測驗的ITEM BIAS/DIF (Testing and Analyzing Item Bias/DIF in Multidimensional Tests)
A Statistical Procedure for Assessing Item Bias/DIF in Multidimensional Item Response Data
Authors: 李信宏
Contributors: Department of Mathematics (數學系)
Keywords: Item bias;Differential item functioning;Item response theory;Unidimensionality;Multidimensionality;MULTISIB
Date: 1998
Issue Date: 2013-01-07T09:03:43Z
Publisher: National Science Council, Executive Yuan (行政院國家科學委員會)
Abstract: Most applications of DIF procedures have been based on the assumption that the test measures only one dominant latent ability. However, if more than one latent trait is relevant to the purpose of the test, these DIF procedures may yield misleading results. In this project, we propose a statistical procedure for assessing DIF in intentionally two-dimensional test data, such as a "math" test designed to measure both algebra ability and geometry ability. Our procedure, MULTISIB, is based on the multidimensional model of DIF presented in Shealy and Stout (1993b), and is a direct extension of its unidimensional counterpart, SIBTEST (Shealy and Stout, 1993a). First, DIF is modeled as resulting from the influence of secondary dimensions other than the two intended dimensions. A new statistic is defined, and a smoothing approach is used to estimate its variance. Large-scale simulation studies are then carried out to investigate how well our procedure detects DIF in two-dimensional tests. Our specific objectives include:
(1) Investigating the performance of our procedure when there is no DIF in the two-dimensional test;
(2) Investigating the performance of our procedure when DIF is present in the test, with the amount of DIF varied across simulation settings;
(3) Comparing our procedure with other procedures;
(4) Applying our procedure to real data, such as college entrance examinations.
Item bias (or differential item functioning, DIF) means that an item contains factors favorable to a particular group of examinees (for example, males versus females, or indigenous versus non-indigenous examinees), giving that group a markedly better chance of answering the item correctly, so that the reliability and fairness of the test are called into question. Unidimensionality means that an item measures only one dominant trait or ability. Most existing methods for studying item bias are built on the assumption that items are unidimensional; when an item measures more than one ability, these DIF detection methods no longer apply. For example, according to analyses by researchers at the College Entrance Examination Center, the mathematics items on the Joint College Entrance Examination measure at least two abilities (for example, spatial and geometric ability). This project proposes a new method for detecting and analyzing item bias when the test is multidimensional (that is, when it measures two or more abilities). Our theoretical foundation is a multidimensional DIF model. We first devise a way to group examinees according to their scores on each ability; obtain ability estimates within each group; compute the differences in ability estimates between the two examinee groups; estimate the item bias; and then derive the test statistic and its asymptotic properties. We will then design simulation studies under various conditions, examining both Type I error and power, so as to fully understand the properties of the proposed method and to apply it to real multidimensional test data.
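As background for the matching-and-comparing steps described above, the unidimensional SIBTEST idea — stratify examinees on a matching-subtest score, then compare reference- and focal-group performance on the studied item within each stratum — can be sketched as follows. This is a minimal illustration, not the MULTISIB procedure itself: it omits SIBTEST's regression correction and the smoothing-based variance estimation mentioned in the abstract, and the function name and interface are assumptions for this sketch.

```python
import numpy as np

def sibtest_statistic(item_ref, item_foc, match_ref, match_foc):
    """Simplified SIBTEST-style DIF statistic (illustrative sketch only).

    item_ref / item_foc   : 0/1 scores on the studied item for the
                            reference and focal groups.
    match_ref / match_foc : matching-subtest scores used to stratify
                            examinees into comparable ability groups.
    Returns (beta_hat, z): beta_hat is the weighted mean difference in
    item performance across strata; z is approximately N(0, 1) when
    there is no DIF.
    """
    levels = np.union1d(np.unique(match_ref), np.unique(match_foc))
    n_foc = len(match_foc)
    beta_hat, var_hat = 0.0, 0.0
    for k in levels:
        r = item_ref[match_ref == k]
        f = item_foc[match_foc == k]
        if len(r) == 0 or len(f) == 0:
            continue  # a score level must occur in both groups to compare
        w = len(f) / n_foc  # weight by the focal-group score distribution
        beta_hat += w * (r.mean() - f.mean())
        if len(r) > 1:  # within-stratum sampling variance, reference group
            var_hat += w**2 * r.var(ddof=1) / len(r)
        if len(f) > 1:  # within-stratum sampling variance, focal group
            var_hat += w**2 * f.var(ddof=1) / len(f)
    z = beta_hat / np.sqrt(var_hat) if var_hat > 0 else 0.0
    return beta_hat, z
```

A positive beta_hat suggests the item favors the reference group among examinees matched on ability, and |z| exceeding the usual normal critical value (e.g., 1.96 at the 5% level) would flag the item for DIF.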
Relation: National Science Council project, project number NSC87-2118-M018-003; research period: 8608–8707 (ROC calendar, i.e., Aug. 1997 – Jul. 1998)
Appears in Collections: [Department of Mathematics] National Science Council Projects

Files in This Item:

File: 2020101612001.pdf (183 KB, Adobe PDF)
