2. Methodology
    Respondents. We administered our instrument as an online survey. Bank officers distributed a link by e-mail to a sample of the target population, employees in the Malaysian financial services industry, and participants were given a deadline by which to complete the instrument. The randomly selected sample comprised 211 employees (84 males and 127 females). Respondents’ experience in the industry ranged from a few months to more than 26 years. All participants were required to accept a data protection agreement before taking part in the survey.
    Questions. The instrument we developed, called Work 4.0, included questions about demographic variables such as gender, ethnicity, place of work, and working experience. In addition, a total of 36 questions measured behavioural competencies in creativity, innovation, entrepreneurship, productivity, problem-solving, self-confidence, empathy, emotional intelligence, and resilience. In the present context, “faking” is likely to occur when Likert-style items are used, and this problem cannot be resolved with consistency scales. We therefore used a forced-choice (FC) approach to combat faking (see, e.g., Brown & Bartram, 2009; Bartram & Burke, 2013; Hontangas et al., 2015; Bartram & Tippins, 2017). FC items are best suited to high-stakes situations with strong demand characteristics, such as personnel selection, applying for bank credit, or being compelled to reveal other high-stakes information, including value measurement (see, e.g., Brown & Bartram, 2009; Hontangas et al., 2015).
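To make the FC format concrete, the short Python sketch below shows how a pairwise “pick the statement most like you” response reduces to binary trait comparisons of the kind an IRT model for comparative data can consume. The statement texts, trait pairings, and function name are invented for illustration; they are not the actual Work 4.0 items.

    # Each FC item pairs two statements keyed to different competencies;
    # the respondent must endorse exactly one (hypothetical content).
    fc_items = [
        {"a": ("creativity", "I often come up with novel ideas."),
         "b": ("resilience", "I recover quickly from setbacks.")},
        {"a": ("empathy", "I sense how colleagues are feeling."),
         "b": ("productivity", "I finish tasks ahead of schedule.")},
    ]

    def score_fc(responses):
        """Turn FC choices into (preferred, rejected) trait comparisons.

        responses[i] is "a" or "b", the statement endorsed on item i.
        """
        comparisons = []
        for item, choice in zip(fc_items, responses):
            other = "b" if choice == "a" else "a"
            comparisons.append((item[choice][0], item[other][0]))
        return comparisons

    # A respondent who endorses "a" on item 1 and "b" on item 2:
    print(score_fc(["a", "b"]))
    # -> [('creativity', 'resilience'), ('productivity', 'empathy')]

Because each choice trades one desirable statement against another, uniformly inflating one’s self-description is harder than on a Likert scale, which is the rationale for the FC design.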
    IRT. We analysed our data with the Winsteps software (Linacre, 2019). We used the one-parameter logistic model to examine the psychometric and statistical properties of our instrument, including its model fit and gender-related biases, and to assess its construct validity. The one-parameter logistic model is a type of item response theory model. According to the Columbia University Mailman School of Public Health:
        The item response theory (IRT), also known as the latent response theory refers to a family of mathematical models that attempt to explain the relationship between latent traits (unobservable characteristic or attribute) and their manifestations (i.e. observed outcomes, responses or performance). They establish a link between the properties of items on an instrument, individuals responding to these items and the underlying trait being measured. IRT assumes that the latent construct (e.g. stress, knowledge, attitudes) and items of a measure are organized in an unobservable continuum. (Item Response Theory, n.d.)
For additional information about IRT, we refer the reader to van der Linden and Hambleton (1995).
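For reference, the one-parameter logistic model, written here in standard notation (which the paper itself does not spell out), gives the probability that respondent j endorses item i as a function of the person parameter \theta_j and the item difficulty b_i:

    P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}

Equivalently, the log-odds of endorsement is simply \theta_j - b_i. This is the Rasch form of the model, which is the one Winsteps estimates.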
    Gender Bias. IRT can be used to screen for bias for or against particular sub-groups of respondents. Bias can occur at the item level and at the test level.
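As a concrete illustration of the item-level check (the notation here is ours): in a Rasch framework, differential item functioning (DIF) with respect to gender is commonly assessed by estimating each item’s difficulty separately for the two groups and inspecting the contrast, which Winsteps reports directly:

    \mathrm{DIF}_i = b_i^{(\mathrm{female})} - b_i^{(\mathrm{male})}

A common rule of thumb flags an item when the absolute contrast reaches roughly 0.5 logits and is statistically significant, though the exact threshold is a matter of convention.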
