
Munich Personal RePEc Archive

Intelligent Decision Support in Automating ABET Accreditation Processes: A Conceptual Framework

Ahmad, Abdul-Rahim and Tasadduq, Imran A. and Imam, Muhammad Hasan and Al-Ahmadi, Mohammad Saad and Ahmad, Muhammad Bilal and Tanveer, Muhammad and Mahmood, Haider

King Fahd University of Petroleum and Minerals, Umm Al-Qura University, King Faisal University, Prince Sultan University, Prince Sattam bin Abdulaziz University

5 August 2021

Online at https://mpra.ub.uni-muenchen.de/109151/
MPRA Paper No. 109151, posted 23 Aug 2021 13:25 UTC


Intelligent Decision Support in Automating ABET Accreditation Processes: A Conceptual Framework

Abdul-Rahim Ahmad 1, Imran A. Tasadduq 2, Muhammad Hasan Imam 3, Mohammad Saad Al-Ahmadi 1, Muhammad Bilal Ahmad 4, Muhammad Tanveer 5 and Haider Mahmood 6

1 KFUPM Business School, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. [email protected], [email protected]
2 Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia. [email protected]
3 Department of Civil Engineering, Umm Al-Qura University, Makkah, Saudi Arabia. [email protected]
4 Department of Computer Science, King Faisal University, Alhassa, Saudi Arabia. [email protected]
5 Prince Sultan University, Riyadh, Saudi Arabia. [email protected]
6 Department of Finance, College of Business Administration, Prince Sattam bin Abdulaziz University, 173 Alkharj 11942, Saudi Arabia. [email protected]

*Corresponding Author: Haider Mahmood. Email: [email protected]

Abstract

In the era of knowledge-based decision-making, there is an increasing need for administrators and instructors of engineering degree programs to make informed decisions on the currency, relevancy, and efficacy of instructional efforts. Academic accreditation through ABET provides the best practices and the means to establish the requisite quality improvement processes. Various regulatory and funding agencies for these engineering degree programs also actively call for accreditations. Nevertheless, the tedious, time-consuming, resource-intensive, and knowledge-based nature of these processes and decisions pronounces the need for a knowledge-based tool to intelligently support and automate various pertinent activities. Such activities span from information collection, aggregation, and analysis to decision analysis, support, and monitoring of outcomes. We propose a conceptual framework for researching, designing, and developing such a knowledge-based system. The proposed conceptual framework seeks not only to automate various tedious and complex activities but also to facilitate knowledge-based decision analysis/support in continuous quality improvement processes. The focus is on the decisions, processes, and activities related to Course Assessments, Course Learning Outcomes, and Student Outcomes. This framework is expected to deliver systems that promise significant improvement in the efficiency and efficacy of instructors, administrators, and accreditors.

Keywords: Expert Systems, Decision Support Systems, Intelligent Systems, Soft Computing, Academic Accreditation, ABET Accreditation


Introduction

Continuous Quality Improvement (CQI) in academic programs is widely seen as the key to currency, relevancy, and efficacy of academic plans and efforts as well as the means of encouraging impactful and innovative modifications. Academic accreditation through established and forward-looking agencies such as the Accreditation Board for Engineering and Technology (ABET) provides the best practices as well as the means of establishing and improving the requisite CQI processes. Accreditation is a voluntary, non-governmental process that includes an external review of the ability of a school to provide quality programs. Accreditation reviews consist of self-evaluations, peer-reviews, committee-reviews, and the development of in-depth strategic plans along with reviews of a school's mission, operations, faculty qualifications and contributions, curricula, and other critical areas [1-4]. As such, academic program accreditation is often deemed a strategic management process. Consequently, the process in itself is frequently considered an important and desirable practice, as it involves all stakeholders in consciously seeking CQI and efficacious innovation [5, 6]. ABET also underscores accreditation as a value-added process providing "assurance that the [graduating] professionals who serve us have a solid educational foundation and are capable of leading the way in innovation, emerging technologies, and in anticipating the welfare and safety needs of the public" [7]. Academic program accreditation is also treated as evidence that a degree program not only meets certain necessary standards but also produces graduates who are ready to enter their professions. In this regard, the fundamental process undertaken by institutions striving for ABET accreditation is shown in Fig. 1.

Fig. 1. Basic processes involved in ABET Accreditation (©Rogers)

Accreditation provides students assurance that the quality of education they receive meets the standards of the profession. The pursuit of accreditation provides an opportunity for academic institutions not only to demonstrate their commitment to maintaining and improving the quality of their programs but also to directly and consciously involve faculty and staff in processes ensuring the currency and the relevancy of programs as well as service to the institutional mission. The intensive team efforts involved in ABET accreditation review processes furnish valuable information that programs can use to deliver the very best education as well as formalize and reinforce a commitment to continuous quality improvement, innovation, and scholarship. Moreover, accreditation provides benefits to the faculty and staff at the accredited schools by attracting quality students, providing greater research opportunities, and allowing for global recognition. It also assures the government and public that the public money is being used prudently. However, obtaining and maintaining accreditation is an extensive, tedious, and resource-intensive exercise, sometimes making it prohibitively costly. Still, such resource intensity and tedium do not undermine the intrinsic or extrinsic value of accreditation [6].

The complex and tedious nature of CQI/Accreditation processes has long made researchers and practitioners suggest some kind of computerized automation solution [8-14]. The most ambitious of these propositions involve the vision of developing and integrating technologies for monitoring, identifying, and tracking student performance and program/curriculum management, as well as the data collection, processing, and reporting aspects outlined in [12]. Still, little work has been done in actually automating the CQI/Accreditation processes.

Literature Review

This research draws from literature in a wide array of synergistic disciplines including, but not limited to, knowledge-based systems, decision support systems, systems analysis and design, algorithm design, and education management. Here we focus more on education management from the perspective of ABET accreditation, highlighting the gap in the existing work and the need for automating and supporting various CQI/Accreditation processes and decisions.

The sole purpose of all learning assessments is to improve the learning of students [12]. This is achieved by collecting evidence from the learning assessments, analyzing the evidence, identifying weak areas, and taking decisions based on the outcomes of the analysis aimed at improving student learning. All these processes and decisions demand a systematic approach and a workable plan that is implemented in the form of a closed-loop feedback system. In short, learning assessment plans involve three components: developing goals, collecting credible evidence, and using the evidence for improvement [6, 12]. The highly complex ABET assessment process emphasizes the use of assessment to improve programs [15]. Since the process is complex, it has a considerable overhead in terms of information collection, information processing, and decision-making [8].

The Learning Assessment Process

The basic components of an effective plan for assessment of learning are available in the literature [12, 16-20]. A typical hierarchy is listed below and shown in Fig. 2: Program Mission/Goals, Program Educational Objectives (PEOs), Student Outcomes (SOs), Course Learning Outcomes (CLOs), Course Topics (CTs) [15, 21, 22].


Fig. 2. Scope of the proposed decision support system
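For illustration, the hierarchy above can be modeled with a handful of simple container types. The following is a minimal Python sketch under our own naming assumptions; none of the class or field names come from ABET or the cited literature.

```python
# A minimal sketch of the assessment hierarchy (Program Mission/PEOs -> SOs
# -> CLOs -> CTs); names and types are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CourseTopic:                 # CT: an actual topic covered in a course
    name: str
    blooms_level: int              # one of the six levels of Bloom's Taxonomy

@dataclass
class CourseLearningOutcome:       # CLO: defined at the course level
    statement: str
    topics: List[CourseTopic] = field(default_factory=list)

@dataclass
class StudentOutcome:              # SO: defined at the program level
    statement: str
    clos: List[CourseLearningOutcome] = field(default_factory=list)

@dataclass
class Program:
    mission: str
    peos: List[str] = field(default_factory=list)   # Program Educational Objectives
    sos: List[StudentOutcome] = field(default_factory=list)
```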

At the outset of all assessment plans, there are goals and objectives at various levels which should be clearly identified and documented. Within the hierarchy of an assessment process, PEOs and SOs are defined at the program level while CLOs are defined at the course level. Program Mission and PEOs are developed with input from all the constituencies, are subject to regular reviews and updates, and consist of "broad statements describing the career and professional accomplishments the graduates will achieve few years after graduation" [7]. Subsequently, SOs are derived from the Program Mission and PEOs. Essentially, SOs describe "... what students are expected to know and be able to do by the time of graduation" [23]. Fundamentally, PEOs and SOs relate to the knowledge, skills, and behaviors that learners acquire as they progress through the academic program.

At an advanced operational level, CLOs provide a formal statement of what students are expected to learn in a course. These statements refer to specific knowledge, practical skills, areas of professional development, attitudes, higher-order thinking skills, etc. that faculty members expect students to learn, develop, or master during a course [24]. Simply stated, CLO statements describe what faculty members want students to know and be able to do at the end of the course. CLOs must be observable, measurable, and student-outcome focused. These CLOs, in turn, map to one or more SOs [11, 12, 19]. However, at the very basic operational level, CTs are the actual topics covered by instructors in a specific course. Indeed, instructors often design the course assessments by considering the CTs covered in the course. Operationally, these CTs serve one or more CLOs. Depending on the pedagogy of the instructor, these CTs are covered in a course at various levels of learning, such as those specified in Bloom's Taxonomy.

Limitations of SO-based Learning Assessments

ABET requires measuring the performance of learners on all SOs. However, the approach of designing assessments/questions that directly address SOs has multiple inherent limitations [11, 12]. For instance, a typical instructor would not necessarily be motivated to design assessment questions to address the SOs, as the process may be complex and non-intuitive. Furthermore, if the assessment questions are not carefully designed, the scores that students would get might not truly reflect the learning of relevant outcomes. Usually, there are only one or two SO-related assessment questions in some of the assessments, such as those on an examination. The rest of the problems in the assessments may not necessarily address any of the outcomes. In addition, it is possible that an assessment question designed to address the relevant outcomes was trivial enough to not fully assess the skills and learning of the student. Moreover, assessment components may occasionally be focused so much on SOs that the assessment may be only remotely related to the actual course content. Consequently, the assessment would fall short of truly indicating the ability of the learner in relevant technical concepts specific to the course [6].

Benefits of CLO-based Learning Assessments

Designing CLO-based learning assessments is relatively simpler and more intuitive, as in most cases this can be done without defining performance benchmarks and is intuitively much closer to the course contents [22]. Detailed discussions on the importance and benefits of CLOs and CLO-based learning assessments can be found in [12, 22, 25]. Primarily, a typical instructor is more comfortable with, and inclined towards, designing questions addressing CLOs, an approach that is easier and more intuitive. Furthermore, CLOs derive directly from the course contents, and the learning assessments consist of questions based upon actual course contents.

SO Assessment Using CLO Assessments

Imam and Tasadduq [12] proposed a new but pragmatic and effective approach for resolving the issues involved in the direct assessment of SOs. In this approach, the instructor focuses only on the CLOs and designs assessments that directly address CLOs. All the CLOs are, in turn, mapped on specific SOs. Consequently, an instructor may indirectly address SOs by directly addressing the CLOs. This traditional CLO-based approach to assessment of learning is easier and intuitive for the instructor. Moreover, due to CLOs being closer to the actual CTs and contents than SOs, the CLO-based assessment appears more relevant to the course and its contents from the perspective of students. Consequently, the performance ratings of a student on the assessments present a relatively more authentic picture of outcome-based learning in a specific course [15].
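To make the CLO-SO conversion concrete, the sketch below aggregates CLO-level scores into SO-level indicators through a CLO-SO map. It is a minimal illustration assuming a simple averaging scheme; the actual algorithm of Imam and Tasadduq [12] may weight and aggregate differently.

```python
# A minimal sketch: average the scores of all CLOs mapped to each SO.
from typing import Dict, List

def so_scores(clo_scores: Dict[str, float],
              clo_so_map: Dict[str, List[str]]) -> Dict[str, float]:
    totals: Dict[str, List[float]] = {}
    for clo, sos in clo_so_map.items():
        for so in sos:
            totals.setdefault(so, []).append(clo_scores[clo])
    return {so: sum(v) / len(v) for so, v in totals.items()}

# Example: CLO1 maps to SO1 only; CLO2 maps to both SO1 and SO2.
print(so_scores({"CLO1": 80, "CLO2": 60},
                {"CLO1": ["SO1"], "CLO2": ["SO1", "SO2"]}))
# {'SO1': 70.0, 'SO2': 60.0}
```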

Indeed, data collected from direct learning assessments in various courses in the program need to be aggregated, analyzed, and evaluated. This is a time-consuming and cumbersome, but nevertheless extremely important, process in which the extent to which students have learnt the outcomes should be ascertained as accurately as possible. Conceivably, there is a significant opportunity for automating various activities and processes. Indeed, several publications discuss this issue of the lack of tools for converting CLO-based assessments to SO-based assessments. Still, there are no broad-based and knowledge-based software tools available to facilitate accurate and meaningful results [15].

In addition to this approach, Imam and Tasadduq [12] provide a new algorithm for closing the CLO-SO gap and present a simple spreadsheet-based implementation for automating the conversion. However, the approach also has several drawbacks, as discussed in the following.

CLO-SO Conversion Approach: Limitations

Despite all the merits of the CLO-SO conversion process highlighted above, this scheme suffers from its own drawbacks, as discussed here. For instance, the conversion from CLO-specific data to SO-specific data is very crude: either a CLO fully addresses one or more SOs or it does not. When a given CLO maps to more than one SO, at times an instructor designs a question that addresses only one of the SOs. Since one must tag the question to the CLO, the question will incorrectly be linked to all the SOs connected with the CLO, even though the question is not focused on the other SOs at all. Instructors may find it cumbersome to tag every question to one of the CLOs. Consequently, instructors must be mindful from the beginning of the course and ensure that questions in all their assessments are tagged to one of the CLOs. When instructors fall short of this task, it becomes extremely tedious to go back, tag the questions of their assessments, and enter CLO-wise student marks to be converted to SO-wise marks. If an instructor wishes to give an assessment to students, such as a project, where the individual projects address different CLOs, it becomes almost impossible to aggregate CLO-wise student marks for this assessment and apply the conversion algorithm. Finally, when one specific CLO maps to more than one SO, a question designed to address the CLO will also address all those SOs; many ABET evaluators show their reservations when a question addresses more than one SO [21, 22, 26].

The Need for Using CT-based Assessments

In order to address these pressing issues, we propose a new and better approach that will be integrated into the proposed automation system. In the new approach, each individual CT will be tagged as belonging to a Bloom's Level (BL), some CLOs, and some SOs. The tagging will be done in such a way that it would also take care of any subjectivity in relationships between CLOs and SOs. For example, a CT might address a given SO very strongly while another CT is only loosely coupled with the same SO. In such cases, tagging may also specify a subjective strength of relationship such as strong, medium, or weak. The proposed system will employ an intelligent fuzzy mechanism for taking care of such subjectivity and uncertainty in the relationship strengths among CTs, BLs, CLOs, and SOs.
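The tagging scheme can be pictured as follows. This is a minimal sketch under our own assumptions: the linguistic-to-crisp strength mapping shown here is only a placeholder for the fuzzy mechanism described above, and all tag values are illustrative.

```python
# Illustrative CT tags: each CT carries a Bloom's Level plus CLO/SO links
# with linguistic strengths; STRENGTH is a stand-in for fuzzy resolution.
STRENGTH = {"weak": 0.25, "medium": 0.5, "strong": 1.0}

ct_tags = {
    "Bernoulli equation": {
        "bloom_level": 3,                            # Apply
        "clos": {"CLO2": "strong"},
        "sos":  {"SO1": "strong", "SO7": "weak"},
    },
}

def crisp_so_weights(ct: str) -> dict:
    """Resolve a CT's linguistic SO tags to crisp weights."""
    return {so: STRENGTH[s] for so, s in ct_tags[ct]["sos"].items()}

print(crisp_so_weights("Bernoulli equation"))        # {'SO1': 1.0, 'SO7': 0.25}
```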

Such an approach of CT-based assessments, employing tags to link CTs to BLs, CLOs, and SOs, has many advantages. For instance, instructors are much more comfortable linking their questions to CTs than mapping them to CLOs or SOs. Tagging is an intuitive and increasingly popular approach used by computer and social networking users. The issues arising from a question addressing multiple SOs will be resolved, as will the issues arising from incorrectly tagging a question to some SO. The mapping of CT-based questions to CLOs and SOs will be not only highly intuitive but also highly effective. Assessment data received from instructors will be more reliable, rendering actions taken on the basis of these data more efficacious. It will have both a positive and an enduring impact on CQI, student learning and tracking, and instructor tracking.

Automation for Academic Accreditation Process

At the strategic level, CQI processes involve measuring and monitoring the overall efficacy, currency, and relevancy of the academic program offered. Accreditation standards form the basis for evaluating the mission, operations, faculty qualifications and contributions, programs, and other critical areas. At the tactical level, accreditation processes include self-evaluations, peer-reviews, committee-reviews, and the development of in-depth strategic plans. These activities also include internal and external reviews of a school's mission, faculty qualifications, and curricula. At the operational level, accreditation processes involve measuring and monitoring outcome-based learning through CLOs and SOs. Indeed, accreditation processes at all these levels require sophisticated knowledge-based decision support and automation. Fig. 2 depicts these strategic, tactical, and operational levels of decisions and actions. The objectives of the automation would include:


• Automating various critical processes pertinent to continuous quality improvement and accreditation in engineering and technology programs
• Designing and implementing a centralized database and knowledge-base of pertinent data and knowledge for facilitating automation and knowledge-based decision support
• Automating the conversion of student performance data on CLOs to SOs for ABET accreditation
• Automating the performance tracking, in terms of learning and teaching, of learners and instructors concerning various CLOs and SOs
• Automating the generation of course reports, course improvement plans, and remedial plans for learners

Conceivably, in the era of knowledge-based decision-making, there is an ever-increasing need for administrators and instructors in engineering and technology degree programs to have access to advanced decision support and automation tools. The need for automation is more pronounced in the areas of CLOs and SOs. As per ABET, student performance on various SOs in different courses must be assessed [15, 26]. Conceivably, if the assessment questions are not carefully designed then the performance of students may not truly reflect student learning of the relevant outcomes, defeating the purpose of the whole CQI/Accreditation exercise [11]. Nevertheless, designing course learning assessments directly relevant to SOs or CLOs is neither intuitive nor trivial due to the cognitive and information overload faced by the instructors. Various other drawbacks of such an approach of directly measuring performance on various SOs are discussed in [12]. In addition, there are drawbacks of both CLO- and SO-based assessments, strongly pointing towards a CT-based assessment approach [22, 26].

In short, a CT-based assessment of student performance is highly recommended. However, the conversion of CT-based assessments to CLO- or SO-based assessments is neither intuitive nor automatic. In order to address these pressing issues, we propose a new approach that will be integrated into the proposed automation system. In the new approach, individual CTs are tagged with specific CLOs, SOs, as well as the relevant Bloom's Level (BL) in Bloom's taxonomy [27]. Naturally, these CT-based assessments would be converted to CLO-, SO-, and BL-pertinent numbers. However, a typical undergraduate program contains around seven SOs and spans around 40 courses. In addition, each course typically has 6-12 CLOs and 10-20 broad CTs. Furthermore, each CT and CLO, in turn, belongs to one of the six BLs. Conceivably, the combinatorial complexity of connections reaches into the tens of thousands or more, rendering accurate and inclusive manual conversion of CT-based assessments to CLO- and SO-based assessments beyond the normal cognitive and information processing capabilities of humans. Nevertheless, the comparison between the traditional and the proposed approaches with reference to student learning clearly supports the intuitiveness as well as the effectiveness of the proposed CT-based assessment approach [12].
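A rough back-of-the-envelope count, using mid-range figures from the text above, shows the scale involved; the numbers below are illustrative only.

```python
# Order-of-magnitude count of tag relationships in a typical program.
courses, cts_per_course, clos_per_course, sos = 40, 15, 9, 7

cts = courses * cts_per_course            # ~600 course topics program-wide
links_per_ct = clos_per_course + sos + 1  # CLO, SO, and Bloom's Level tags
print(cts * links_per_ct)                 # ~10,200 relationships, before
                                          # per-question and per-student links
```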

Naturally, processes involving such tedium require specialized tools for automating information collection, analysis, and aggregation as well as for furnishing sophisticated decision support. We propose to research, design, and develop such a sophisticated knowledge-based automation and decision support tool. The proposed system for automation and knowledge-based decision support will employ intelligent soft computing and fuzzy inferencing mechanisms to take care of the subjectivity and uncertainty in the data specifying relationships between CTs and CLOs, SOs, BLs, etc. The scope of the framework spans the automation and knowledge-based decision support in operational and tactical processes and activities related to CTs, CLOs, and SOs, as depicted in Fig. 2. It involves developing and implementing novel methodologies for generating CLO- and SO-based performance data utilizing CT-based assessments. Strategic-level activities, often comprising highly subjective decision-making, are currently excluded from the scope.

ABET / CQI Automation: Existing Research

Existing work on automating the conversion of CLO-based learning assessments to SO-based learning assessments includes that described in [12], which uses a new algorithm for converting the CLO-based data into SO-based student performance indicators. They also present a spreadsheet-based tool that makes this conversion simpler for instructors and facilitates data entry, aggregation, and analysis. Despite its limitations, discussed later, the spreadsheet-based tool has been used in various departments of Umm Al-Qura University. The successful use of the tool justifies the investment in researching, designing, developing, and deploying a knowledge-based system for automating CQI/Accreditation processes (KSA-Accredit) and providing effective decision support to instructors and administrators.

In [15], fuzzy logic has been applied to evaluate student performance; it has been shown that useful data on student learning on ABET SOs can be obtained using the proposed approach. Expert systems can also be used to assess student learning outcomes, as shown in [28]. Continuous improvement of a program is an essential element of ABET accreditation. Therefore, it is important to devise strategies that make this task easy for the faculty. The authors in [21, 26] have shown that an expert system can be used to generate course improvement plans automatically. Equally important is designing effective assessments; in another article [22], the same authors have shown how effective assessments can be designed using an expert system.

As mentioned, spreadsheet-based tools have some major limitations. First, spreadsheets quickly become cumbersome when dealing with a large amount of data coming from many courses. Second, dealing effectively with multiple themes of data and knowledge, as is the case in CQI/Accreditation processes, requires the use of some kind of database component in the system [29]. Creation of a centralized database and knowledge-base for CT-, CLO-, and SO-pertinent data and knowledge would enable integrated performance views of each student, instructor, CT, CLO, SO, etc., as discussed in subsequent paragraphs.

Centralized Knowledge-base/Database

Creation of a central database of all course and program information and assessment results has the promise of offering much-desired data integrity, data availability, and data transparency, as well as collaborative and data mining capabilities [6, 29, 30]. It will facilitate making the desired information available at the requisite level of detail/granularity [29].

In CQI/Accreditation processes and decisions, the need for a knowledge-based decision support system cannot be overemphasized. The existing approaches towards some kind of automation in course reporting processes are devoid of knowledge-based decision support capabilities [12]. However, the knowledge-intensive nature of the domain, the absence of accurate models capturing complex decision-making dynamics, and the non-availability of expert advice in a timely or economical fashion highlight the need for resorting to knowledge-based decision support and expert system methodologies in CQI and accreditation processes [31, 32]. Indeed, decision support and expert system technologies have been successfully developed and deployed in such diverse disciplines as Engineering, Business, Mining, and Medicine [32-34]. However, little effort has been expended in employing decision support and expert system methodologies in the important area of CQI and academic accreditation.

It is important to note that the proposed KSA-Accredit, by virtue of its knowledge-based decision support capabilities, will help identify any CT-, CLO-, or SO-specific weaknesses in individual students and furnish academic leaders and facilitators an opportunity to make informed, objective, and timely remedial decisions and take corresponding actions [6]. In the same league, KSA-Accredit will provide useful information on instructor-specific performance in terms of success in achieving various CLOs and SOs. This would not only enable instructors to set up their own professional development goals but also enable administrators to conduct informed conversations with individual instructors on their professional performance as well as opportunities for improvement and professional development/support.

ABET / CQI Automation: Existing Automation Solutions

As already mentioned, in an assessment plan, the data collected from direct assessments in various courses need to be aggregated, analyzed, and evaluated. Such an evaluation process is time-consuming and arduous for both instructors and administrators. Indeed, several publications discuss this issue of the time and resource intensity of these processes and propose automation solutions. For example, Burge and Leach [9] present a tool based on Excel macros that automatically determines the degree to which individual students meet the learning objectives, course objectives, and program directives, which is equivalent to evaluating CLO and SO satisfaction.

Essa et al. [10] present a web-based software tool called ACAT (ABET Course Assessment Tool) to keep students' records and generate various reports. Ringenbach [35] presents another web-based tool, Web-CAT (Course Assessment Tool), that mainly manages students' data. Despite being relatively more user-friendly and feature-rich than ACAT, Web-CAT lacks the functionality for effectively assessing the CLOs and SOs required for ABET accreditation. Haga et al. [36] and Morrell et al. [37] present a database management system for tracking course assessment data and reporting related outcomes for program assessment. Urban-Lurain et al. [38] also present a database management system for storing large assessment data of students. These systems are useful in keeping track of historical data, querying stored information, and managing large amounts of data. However, these systems lack not only the requisite knowledge-based analysis and mining facilities but also tangible decision support capabilities for automating the CLO- and SO-related activities and processes.

Imam and Tasadduq [11] report a spreadsheet-based course reporting and CLO-SO conversion system that automates some tedious course assessment and reporting activities related to CQI/Accreditation. The system has been successfully used in various programs at Umm Al-Qura University, Saudi Arabia. Despite users' reported acceptance of the system, it has various shortcomings, including the cumbersome and error-prone nature of various requisite manual processes.

The most advanced system available so far is proposed in [12]. This system was implemented as commercial software under the name CLOSO [39]. CLOSO automates the course-reporting process and provides instructors the convenience of entering most of the requisite data through various selections and file upload operations. It also allows instructors to enter student survey data as well as their own feedback. Experience of using this software in various programs at Umm Al-Qura University, Saudi Arabia, indicates substantial efficiency gains in the course reporting processes as well as substantial enthusiasm towards using the software. However, CLOSO only performs a direct conversion of CLO-data to SO-data, which has its own inherent limitations, as discussed earlier. Furthermore, the user interface of CLOSO is not very effective and offers opportunities for improvement. In addition, CLOSO does not provide any knowledge-based decision support to instructors and administrators. Nonetheless, the experience of using CLOSO [12] in an actual operating environment is considered a proof of concept for KSA-Accredit. Indeed, this system is expected to fill many remaining gaps in the existing systems.

The existing automation solutions described earlier have severe limitations [11, 12]. Most existing systems are little more than an attempt to automate or formalize course record-keeping activities while relying on a large amount of manual data input in an error-prone manner using ineffective user interfaces [6]. The absence of the requisite scalability, collaborative tools, and CQI decision support limits their use at large universities. The situation is complicated by the lack of error detection, error correction, error reporting, or data validation facilities. Above all, these systems do not offer any functionality to convert CT-based assessments to the BL-, CLO-, and SO-based assessments required for CQI/Accreditation processes. The absence of an intelligently designed centralized database, knowledge-base, and the requisite data mining capabilities is another handicap [6]. In addition, existing systems do not provide any integrated reporting mechanism for assessing the overall achievements of individual learners on various SOs. The unavailability of tools for choosing the requisite information level and granularity for various levels of decisions severely limits objective and informed interventions aimed at the professional development of learners and instructors [6].

Framework for Intelligent Automation

This framework builds on the Expert System (ES) paradigm for facilitating intelligent decision support in automating accreditation processes. The emphasis is on the development of tools that could supplement the knowledge, experience, design intuition, and cognition of human experts. This choice of ES as the basis for the framework emanates from such inherent characteristics of an ES as the encoded knowledge, the separation of domain knowledge from the control knowledge, the ability to reason under uncertainty, the explanation facility, the knowledge acquisition capability, and the interactive user interface. These characteristics will be elucidated within the discussion on the proposed framework.

Automation can be realized to different degrees at various levels. It can include automation in the generation of assessments. It can also include calculation of the satisfaction level on various CLOs for individual learners as well as for a group of learners. Further, automation could involve the calculation of the satisfaction level on various SOs for individual learners as well as for a group of learners. In addition, automation may span generation of course improvement plans. Moreover, some automation in generation of remedial plans for individual learners is a possibility. Indeed, the scope of such automation is only limited by the capabilities incorporated into the system. Once again, these plans will serve as decision alternatives, which can be interactively adapted by the human decision-maker. Consequently, such a system will only serve as an interactive and intelligent decision support tool.

The proposed system, depicted in Fig. 3, consists of several modules working in tandem to provide intelligent decision support in the automation and streamlining of ABET accreditation processes. Here we provide a somewhat generic discussion of the various components of the framework.


Fig. 3. The R&D framework for the proposed system (KSA-Accredit)

Intelligent Assessment Designer (IAD)

We envisage a Genetic Algorithm (GA) based approach for building the IAD by employing various rules and heuristics. The intelligence aspect comes from the employment of penalties/rewards or preference weights, furnished by a fuzzy inference module (FIM), to evaluate the fitness value. The primary task involved in automating the assessment process is to produce superior assessment alternatives for further consideration by assessors [40]. In similar complex problems, studies indicate that GAs are a promising search and optimization approach [41]. The system should incorporate expert knowledge and user preferences in the process through the composite fitness functions of the IAD. This fitness function would utilize crisp preference weights furnished by the FIM.
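As an illustration of such a composite fitness function, the sketch below scores a candidate assessment by its CLO coverage, penalized by length, using crisp weights of the kind the FIM would furnish. The criteria, feature names, and weights are our own assumptions, not a specification of the eventual IAD.

```python
# A minimal composite GA fitness sketch for candidate assessments.
from typing import Dict, List, Set

def fitness(questions: List[Dict],          # each tagged with the CLOs it covers
            target_clos: Set[str],
            fim_weights: Dict[str, float]) -> float:
    covered = set().union(*(q["clos"] for q in questions)) if questions else set()
    coverage = len(covered & target_clos) / len(target_clos)
    length_penalty = fim_weights["length"] * max(0, len(questions) - 10)
    return fim_weights["coverage"] * coverage - length_penalty

candidate = [{"clos": {"CLO1", "CLO2"}}, {"clos": {"CLO3"}}]
print(fitness(candidate, {"CLO1", "CLO2", "CLO3", "CLO4"},
              {"coverage": 1.0, "length": 0.05}))    # 0.75
```

A GA would then evaluate a population of such candidates, selecting and recombining the fittest across generations.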

For instance, the IAD could generate several alternative assessments for an instructor to choose from and modify. This modification could come in as simple a form as the user selecting or deselecting certain assessment components/questions while observing the degree to which the assessment is linked to different CTs, CLOs, SOs, etc. The interactive user interface (UI) will permit the user to interactively adapt elements of the task at hand while observing the impact of those changes on various performance measures. For instance, while preparing an assessment, the user could adapt the assessment components based on individual perspective and preferences, all the while monitoring the effect of those changes on the degree to which the assessment meets various criteria, such as the degree to which it covers certain CLOs or SOs.

Fuzzy Inferencing Module (FIM)

Here we provide discussions on the modeling of, and inferencing from, subjective and uncertain preferences as well as the design, implementation, and working of the FIM. An inference engine is the brain of any ES and contains general algorithms capable of manipulating and reasoning about the knowledge stored in the knowledge base by devising conclusions [34]. Ideally, the inference engine is distinct from the domain knowledge and is largely domain independent.

Frequently, the major problem in building intelligent systems is the extraction of knowledge from human experts, who think in an imprecise or fuzzy manner. The same is true of the assessment design problem, where the associated knowledge is often imprecise, incomplete, inconsistent, and uncertain. Within the context of KSA-Accredit, the term imprecision refers to values that cannot be measured accurately or are vaguely defined. Likewise, incompleteness implies the unavailability of some or all of the values of an attribute, inconsistency signifies differences or even conflicts in the knowledge elicited from experts, and uncertainty suggests the subjectivity involved in estimating the value or validity of a fact or rule. For instance, fuzzy logic (FL) will help convert subjective evaluations of learning achievements (like high, medium, low, or grades like A, B, C) to crisp quantitative values for decision-making regarding these achievements. Another instance could be related to the CLO-SO map, where crisp connection weights between a CLO and an SO can be generated using the subjectively specified low/medium/high connections between a CLO and an SO [15].
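For instance, a low/medium/high connection label can be turned into a crisp weight by centroid defuzzification over simple membership functions. The sketch below assumes triangular fuzzy sets of our own choosing; the actual sets and inference rules used in [15] may differ.

```python
# A minimal defuzzification sketch: linguistic label -> crisp weight.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

SETS = {"low": (0.0, 0.25, 0.5), "medium": (0.25, 0.5, 0.75), "high": (0.5, 0.75, 1.0)}

def defuzzify(label: str, n: int = 101) -> float:
    """Centroid of a single linguistic label over [0, 1]."""
    xs = [i / (n - 1) for i in range(n)]
    mu = [triangular(x, *SETS[label]) for x in xs]
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)

print(defuzzify("high"))   # ~0.75: crisp CLO-SO connection weight for "high"
```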

The inherently vague, imprecise, and possibly conflicting nature of many assessment preferences makes FL a strong option for not only modeling the system dynamics but also implementing the FIM. Indeed, the ability of FL to work with imprecise terms has been efficaciously employed in automated reasoning in ES for various subjective work-domains. As such, the underlying core of KSA-Accredit inferencing uses an FIM comprising fuzzy sets, rules, and preferences for obtaining penalties and rewards in the assessment fitness evaluation function for ranking and comparison purposes as well as for the automatic generation of assessment alternatives. The potential arises from the fact that FL provides a very natural representation of human conceptualization and partial matching. Indeed, the human decision-making process inherently relies on common sense as well as the use of vague and ambiguous terms. FL provides means for working with such ambiguous and uncertain terms [32]. Consequently, an FL-based FIM is expected to deliver much flexibility in the automated accreditation process. As such, we deem the FIM one of the core components, along with the IAD, in tackling and automating the accreditation process as well as in furthering research in this important area.

The core concept involves employing an FIM comprising fuzzy sets, rules, and preferences in obtaining penalties and rewards for the hybrid fitness evaluation functions as well as various critical parameters for the IAD and the preference discovery module (PDM), explained in its own section below. The primary benefit of a fuzzy rule-based system is that its functioning more closely mimics the rules used by human experts. Traditional rigid and myopic fitness functions do not serve well in such complex, subjective, and uncertain problem domains. Indeed, multi-criteria fitness functions are deemed more appropriate for the automatic generation, evaluation, and comparison of assessment alternatives. Indeed, KSA-Accredit could have provisions for decision-makers to specify parameters in both crisp and fuzzy manner, thereby increasing the flexibility and the ease with which decision-makers may creatively adapt their preferences.

Preference Discovery Module (PDM)

The reliability and effectiveness of the FIM significantly depend on the reliability of preferences. The task of extracting knowledge from experts is extremely tedious, expensive, and time consuming. In this regard, the implicit and dynamic nature of preferences, as well as the effort required for building and updating an ES, underscore the need for automated learning. Indeed, learning is an important constituent of any intelligent system [32]. However, a traditional ES cannot automatically learn preferences or improve through experience.

An automated learning mechanism can improve the speed and quality of knowledge acquisition as well as the effectiveness and robustness of an ES. Incidentally, Artificial Neural Networks (ANN) have been proposed as a leading methodology for such data mining applications. ANN can be especially useful in dealing with the vast amount of intangible information usually generated in subjective and uncertain environments. The ability of ANN to learn from historical cases or decision-makers' interaction with assessment alternatives can automatically furnish some domain knowledge and design rules, thus eluding the tedious and expensive processes of knowledge acquisition, validation, and revision. Consequently, the integration of ANN with ES can have the ability to solve tasks that are not amenable to solution by traditional approaches [32].

Fortunately, the problem at hand is amenable to automatic learning of non-quantifiable and dynamic design rules from past cases. Furthermore, it is possible to automatically and incrementally learn some decision-makers' preferences from their evaluation and manipulation of accepted or highly ranked assessments using an online ANN or Reinforcement Learning based validation agent. For instance, a supervised learning mechanism using user ratings of past assessments/cases can learn the individual preferences of the user and use this learning to generate more personalized assessment alternatives. Further, unsupervised learning mechanisms can learn from interaction data captured during changes made by the user and adapt preferences related to the individual.
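For illustration, an incremental preference learner of the kind envisioned for the PDM could be as simple as the following least-mean-squares sketch, which nudges preference weights toward a user's rating of an assessment. It is a deliberately simplified stand-in for the ANN- or RL-based agent discussed above; the features and learning rate are illustrative.

```python
# A minimal sketch of incremental preference learning from user ratings.
from typing import Dict

def lms_update(weights: Dict[str, float],
               features: Dict[str, float],
               rating: float, lr: float = 0.1) -> None:
    """Nudge weights so the predicted rating moves toward the user's rating."""
    predicted = sum(weights[k] * features[k] for k in weights)
    error = rating - predicted
    for k in weights:
        weights[k] += lr * error * features[k]

prefs = {"coverage": 0.5, "difficulty": 0.5}
# The user rated a full-coverage, high-difficulty assessment rather poorly.
lms_update(prefs, {"coverage": 1.0, "difficulty": 1.0}, rating=0.4)
print(prefs)   # both weights drop to about 0.44, reflecting the low rating
```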

Knowledge-Base (KB)

Indeed, knowledge is deemed the principal ingredient of an ES [42]. The conceptual model of the elicited knowledge is converted to a format amenable to computer manipulation through a process called Knowledge Representation [32]. Typically, knowledge elicitation continues throughout the lifecycle of ES development and deployment, as knowledge may be incomplete, inaccurate, and evolutionary in nature. The knowledge of KSA-Accredit will consist of facts and heuristics or algorithms. Further, it will contain the relevant domain-specific and control knowledge essential for comprehending, formulating, and solving the underlying problems. There are various ways of storing and retrieving preferences/rules, including 'If-Then' production rules. Representing knowledge in the form of such traditional production rules enhances the system's modularity, which prompted us to adopt this approach. However, conventional logic-based representation does not allow the simple addition of new decision rules to the ES without a mechanism for resolving conflicts, resulting in inflexibilities that are not conducive to the proposed system. This furnishes another reason for our vision of using FL for modeling preferences and building the FIM.
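For illustration, 'If-Then' production rules of this kind can be encoded as condition/action pairs, as in the minimal sketch below; the rule content and fact names are invented for the example.

```python
# A minimal production-rule sketch; run_rules also returns the fired-rule
# trace, which an Explanation Module can later use.
from typing import Dict, List

RULES: List[Dict] = [
    {"name": "flag_weak_so",
     "if":   lambda f: f["so_score"] < f["so_target"],
     "then": lambda f: f["actions"].append("add remedial CT coverage")},
]

def run_rules(facts: Dict, rules: List[Dict]) -> List[str]:
    """Forward-chain once over the rules; return the names of fired rules."""
    fired = []
    for rule in rules:
        if rule["if"](facts):
            rule["then"](facts)
            fired.append(rule["name"])
    return fired

facts = {"so_score": 0.55, "so_target": 0.70, "actions": []}
print(run_rules(facts, RULES), facts["actions"])
# ['flag_weak_so'] ['add remedial CT coverage']
```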

Within the context of ABET accreditation, the KB contents would include, but not be limited to, CTs, CLOs, SOs, CLO-SO maps, relevant tagging, course syllabi, course survey instruments, a pool of actual assessment questions and answers, rules for determining achievements on relevant CLOs and SOs, etc. Indeed, the KB would more likely be a growing body that could be adapted to changes and innovations in aspects of the curriculum.

Knowledge Acquisition Module (KAM)

Knowledge acquisition is the accumulation, transmission, and transformation of problem-solving expertise from experts or knowledge repositories to a computer program to create and expand the knowledge base [34]. It should be noted that knowledge acquisition is a major bottleneck in the development of an ES [43]. This is primarily due to mental activities at the sub-cognitive level that are hard to verbalize, capture, or even become cognizant of while employing the usual cognitive approach of knowledge acquisition from experts [32]. As such, the task of extracting knowledge from an expert is extremely tedious and time consuming. For instance, knowledge elicitation through interviews generates an estimated two to five usable rules a day [43].

Knowledge could be derived from domain experts and the existing knowledge, as well as through some automated machine learning mechanism. We propose to build the PDM in a manner that could provide knowledge about user preferences in a form that is readily usable by the IAD and the FIM. Most likely, the KAM would be accessible to the domain expert, the program accreditation coordinator, who adds CTs, CLOs, SOs, course syllabi, course survey instruments, etc. to the KB. Another instance could be the entering of the CLO-SO map. Subject Matter Experts may also use the KAM to add to the pool of assessment questions and CTs, the tagging of CTs with relevant CLOs, the tagging of CLOs with relevant SOs, and so on.

Explanation Module (EM)

The ability to trace responsibility for conclusions to their sources is crucial to the transfer of expertise, problem solving, and even the acceptance of proposed outcomes [21]. The EM should trace such responsibility and explain the behavior of the ES by interactively answering questions. For instance, the EM could enable a user to determine why a piece of information is needed or how conclusions are obtained. In its simplest form, the EM could furnish the sequence of rules that were fired in reaching a certain decision. Indeed, the capability of an ES to explain the reasoning behind its recommendations is one of the main reasons for choosing this paradigm over other intelligent approaches for the implementation of our concept. The EM is crucial not only from a system development perspective but also from user acceptance and adoption perspectives.
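In that simplest form, the explanation can be assembled directly from the fired-rule trace, as in this minimal sketch, which reuses the trace returned by the rule-engine sketch in the Knowledge-Base section; the wording and fact names are illustrative.

```python
# A minimal rule-trace explanation sketch.
def explain(fired: list, facts: dict) -> str:
    trace = " -> ".join(fired) if fired else "no rules fired"
    return f"Conclusion reached via rules: {trace}; based on facts: {facts}"

print(explain(["flag_weak_so"], {"so_score": 0.55, "so_target": 0.70}))
# Conclusion reached via rules: flag_weak_so; based on facts: {'so_score': 0.55, 'so_target': 0.7}
```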

Once again, a well-designed, interactive, and effective user interface is an important ingredient in enabling a good explanation facility. In addition, the incorporation of effective explanation capabilities is elusive without conducting a meticulously designed empirical study with actual users.

User Interface (UI)

The UI delimits the manner in which the system interacts with the user. An interactive and user-friendly UI is deemed an important factor in rendering the system easy to learn and easy to use "… since to the end-user the interface is the system" [44]. Furthermore, research has shown that interface aesthetics, as well as interactivity, play a larger role in users' attitudes than users will admit [45]. As such, the perceived usefulness of the interface, or users' perception of the usefulness of the interface in a given work domain, plays an implicit role in longer-term user acceptance and performance [45, 46]. Accordingly, we envision an adaptive and interactive graphical user interface (GUI) for the system [47-60].

The UI would have the ability to accept input for the assessment design from data files. It should have provision for manual data entry or the overriding of preferences by decision-makers. Moreover, it should enable easy, fast, informed, and interactive manipulation of assessment alternatives by the decision-maker [61-69]. We propose that the UI be designed, developed, and tested using the philosophy of Ecological Interface Design and various Usability and Human-Computer Interaction guidelines. Further, the UI design should afford information about the context through various textual, graphical, analogical, and iconic references.

The Synergy of Intelligent Components

The proposed framework differs from a traditional ES in its various intelligent components. Consequently, we deem it beneficial to elaborate on the philosophy and synergistic potential of these intelligent components. This is because of our belief that these components furnish a significant amount of realizable automation in generating and manipulating superior assessment alternatives by addressing the core issues in building the whole system. Furthermore, these components furnish a vehicle for carrying out further research in this direction [70-74].

The need for intelligent components arises from the limitations of conventional systems design techniques, which typically work under the implicit assumption of a good understanding of the process dynamics and related issues. Conventional systems design techniques fall short of providing satisfactory results for ill-defined processes operating in unpredictable and noisy environments, such as assessment design. Consequently, the use of such non-conventional approaches as FL, ANN, and GA is required. Knowledge of the strengths and weaknesses of these approaches could result in hybrid systems that mitigate limitations and produce more powerful and robust systems [32, 47, 48]. Indeed, the potential of these technologies is limited only by the imagination of their users [75-83].

Among the intelligent components of KSA-Accredit, the IAD generates superior assessment alternatives based on pre-specified and user-specified constraints and preferences as well as preference weights furnished by the FIM. The FIM incorporates the soft knowledge and reasoning mechanism in the inference engine. The PDM complements the IAD and the FIM by automatically discovering and refining some preferences. In this synergy, the FIM receives fuzzy preferences and rules from various sources including domain experts, the knowledge base, and the PDM. These fuzzy preferences and rules are defuzzified by the FIM through its inferencing mechanism, furnishing crisp weights for use in the IAD. The IAD generates superior assessment alternatives for ranking and adaptation by users. The assessment alternatives generated by the IAD could be validated by the user or by the PDM. Consequently, the experts' ranking of assessment alternatives would serve as learning instances for updating and refining the knowledge-base, fuzzy rules, and preferences. Incremental learning technologies like Reinforcement Learning might prove useful here [6].

These intelligent components combine the powers of the three main soft computing technologies, representing complementary aspects of the human intelligence needed to tackle the problem at hand [48]. The real power is extracted through the synergy of expert systems with fuzzy logic, genetic algorithms, and neural computing, which improves the adaptability, robustness, fault tolerance, and speed of knowledge-based systems [32, 47, 48]. We have deliberately kept our discussion of the individual modules of KSA-Accredit largely generic in character. The rationale is that the actual details of accreditation process automation would depend on the goals of the actual system, which would differ from system to system.

Conclusion

We have described the problem of accreditation process automation, its significance and relevance, and the role intelligent systems and soft computing tools can play in improving the efficacy and efficiency of the automation of accreditation processes. In particular, we have presented a framework for a novel intelligent approach to solving this important and intricate problem. Our framework involves the use of human intuition, heuristics, metaheuristics, and soft computing tools like artificial neural networks, fuzzy logic, and expert systems. We have explained the philosophy and synergy of the various intelligent components in the framework. The framework contributes to the domain of intelligent automation of accreditation processes by enabling explicit representation of experts' knowledge, formal modeling of fuzzy user preferences, and swift generation and refinement of superior assessment alternatives. This framework would help develop systems that would significantly improve the efficiency and efficacy of instructors and administrators involved in continuous quality improvement and accreditation activities in engineering and technology related academic programs. Further, it would promote the acceptance of similar activities in other disciplines.

Data Availability

No data were used to support this study.


Acknowledgement: The authors acknowledge the support of NSTIP Saudi Arabia, KFUPM Business School, and Umm Al-Qura University in conducting this work. We also acknowledge the technical advisory and support from Design and Analysis Inc., Jeddah, Saudi Arabia.

Funding Statement: The authors received funding from NSTIP Saudi Arabia for the designing and writing of this research [Grant ID: Maarifaa-13-INF1094-10].

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. D. J. Salto, "Beyond national regulation in higher education? Revisiting regulation and

understanding organisational responses to foreign accreditation of management education programmes,"

Quality in Higher Education, pp. 1-16, 2021.

2. M. J. Manatos and J. Huisman, "The use of the European Standards and Guidelines by national

accreditation agencies and local review panels," Quality in higher education, vol. 26, no. 1, pp. 48-65, 2020.

3. M. Andreani, D. Russo, S. Salini, and M. Turri, "Shadows over accreditation in higher education:

some quantitative evidence," Higher Education, vol. 79, no. 4, pp. 691-709, 2020.

4. W. Rashideh, O. A. Alshathry, S. Atawneh, H. Al Bazar, and M. S. AbualRub, "A Successful

Framework for the ABET Accreditation of an Information System Program," Intelligent Automation and

Soft Computing, vol. 26, no. 6, pp. 1285-1306, 2020.

5. F. David, F. R. David, and M. David, Strategic management: A competitive advantage approach,

17th ed. Pearson–Prentice Hall Florence, 2019.

6. A. R. Ahmad, "Knowledge-based Automation and Decision Support in Operational and Tactical

Aspects of Continuous Quality Improvement Processes for Academic Accreditations," Working Paper,

KFUPM Business School, King Fahd University of Petroleum and Minerals, 2021.

7. ABET. (2021, 19 February). ABET Accreditation. Available: http://www.abet.org/accreditation/

8. R. J. Leach, "Analysis of ABET accreditation as a software process," in ACM SIGCSE Bulletin,

2008, vol. 40, no. 3, pp. 356-356: ACM.

9. L. L. Burge and R. J. Leach, "An advanced assessment tool and process," in Proceedings of the

41st ACM technical symposium on Computer science education, 2010, pp. 451-454: ACM.

Page 18: Intelligent Decision Support in Automating ABET

10. E. Essa, A. Dittrich, S. Dascalu, and F. C. Harris, "ACAT: a web-based software tool to facilitate

course assessment for ABET Accreditation," in Information Technology: New Generations (ITNG), 2010

Seventh International Conference on, 2010, pp. 88-93: IEEE.

11. M. Imam and I. A. Tasadduq, "Satisfaction of ABET Student Outcomes," in Global Engineering

Education Conference (EDUCON), 2012 IEEE, 2012, pp. 1-6: IEEE.

12. M. H. Imam and I. A. Tasadduq, "Evaluating the satisfaction of ABET student outcomes from

course learning outcomes through a software implementation," International Journal of Quality Assurance

in Engineering and Technology Education (IJQAETE), vol. 2, no. 3, pp. 21-33, 2012.

13. Farhanullah et al., "An E-Assessment Methodology Based on Artificial Intelligence Techniques to

Determine Students’ Language Quality and Programming Assignments’ Plagiarism," Intelligent Automation and Soft Computing, vol. 26, no. 1, pp. 169-180, 2020.

14. P. Wang, H. Cai, and L. Wang, "Design of Intelligent English Translation Algorithms Based on a

Fuzzy Semantic Network," Intelligent Automation and Soft Computing, vol. 26, no. 3, pp. 519-529, 2020.

15. M. H. Imam, I. A. Tasadduq, A.-R. Ahmad, and F. Aldosari, "Obtaining ABET student outcome

satisfaction from course learning outcome data using fuzzy logic," EURASIA Journal of Mathematics,

Science and Technology Education, vol. 13, no. 7, pp. 3069-3081, 2017.

16. K. D. Blaha and L. C. Murphy, "Targeting assessment: how to hit the bull's eye," Journal of

Computing Sciences in Colleges, vol. 17, no. 2, pp. 112-121, 2001.

17. C. Browning and S. Sigman, "Problem-based assessment of the computer science major," Journal

of Computing Sciences in Colleges, vol. 23, no. 4, pp. 189-194, 2008.

18. R. M. Felder and R. Brent, "Designing and teaching courses to satisfy the ABET engineering

criteria," JOURNAL OF ENGINEERING EDUCATION-WASHINGTON-, vol. 92, no. 1, pp. 7-26, 2003.

19. T. G. Wang, D. Schwartz, and R. Lingard, "Assessing student learning in software engineering,"

Journal of Computing Sciences in Colleges, vol. 23, no. 6, pp. 239-248, 2008.

20. K.-B. Yue, "Effective course-based learning outcome assessment for ABET accreditation of

computing programs," Journal of Computing Sciences in Colleges, vol. 22, no. 4, pp. 252-259, 2007.

21. M. H. Imam, I. A. Tasadduq, A.-R. Ahmad, F. Aldosari, and H. Khan, "Automated Generation of

Course Improvement Plans Using Expert System," International Journal of Quality Assurance in

Engineering and Technology Education (IJQAETE), vol. 6, no. 1, pp. 1-12, 2017.

22. M. H. Imam, I. A. Tasadduq, M. H. Khan, A.-R. Ahmad, and F. Aldosari, "Assessment Design

Through an Expert System and Its Application to a Course of Hydraulics," in Proceedings of 2016 Canadian

Engineering Education Association (CEEA), Halifax, Canada, 2016.

23. ABET. (2021, 19 February). Student Outcomes. Available:

http://www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-

2016-2017/#outcomes

24. L. Suskie, Assessing student learning: A common sense guide, 3rd ed. John Wiley and Sons, 2018.

25. P. Race, Designing assessment to improve physical sciences learning. Higher Education Academy

Hull, 2009.

26. M. H. Imam, I. A. Tasadduq, A. R. Ahmad, F. Aldosari, and H. Khan, "Automated Generation of

Course Improvement Plans Using Expert System," in Intelligent Systems: Concepts, Methodologies, Tools,

and Applications, M. Khosrow-Pour, S. Clark, M. E. Jennex, A. Becker, and A.-V. Anttirioko, Eds. Hershey,

PA: IGI Global, 2018, pp. 1232-1243.

27. B. S. Bloom, "Reflections on the development and use of the taxonomy," Yearbook: National

Society for the Study of Education, vol. 92, no. 2, pp. 1-8, 1994.

28. M. H. Imam, I. A. Tasadduq, A.-R. Ahmad, and F. Aldosari, "An Expert System for Assessment

of Learning Outcomes for ABET Accreditation," presented at the International Conference on Engineering

Education, Singapore, January 07-08, 2016.

29. D. Kroenke and D. J. Auer, Database Processing: Fundamentals, Design, and Implementation, 13th

ed. Prentice Hall, 2013.

30. K. Laudon and J. Laudon, Management Information Systems: International Edition, 13/E. Prentice

Hall, 2013.

31. A.-R. Ahmad, O. Basir, K. Hassanein, and S. Azam, "An intelligent expert systems' approach to

layout decision analysis and design under uncertainty," in Intelligent Decision Making: An AI-Based

Approach: Springer Berlin Heidelberg, 2008, pp. 321-364.

32. M. Negnevitsky, Artificial intelligence: a guide to intelligent systems. Canada: Pearson Education,

2011.

33. G. M. Marakas, Decision support systems in the 21st century. Prentice Hall Upper Saddle River,

NJ, 2003.

34. R. Sharda, D. Delen, and E. Turban, Analytics, data science, and artificial intelligence: Systems for

decision support. Reading: Pearson, 2020.

35. M. Ringenbach, "Collecting Student Data for Accreditation Assessment," Virginia Polytechnic

Institute and State University, 2010.

36. W. Haga, G. Morris, and J. Morrell, "An Improved Database System for Program Assessment,"

Information Systems Education Journal, vol. 9, no. 7, p. 21, 2011.

37. J. Morrell, G. Morris, and W. Haga, "A system for tracking course assessment data for ABET

accreditation," in 2009 Northeast American Society of Engineering Education Conference, 2009.

38. M. Urban-Lurain et al., "An assessment database for supporting educational research," in Frontiers

in Education Conference, 2009. FIE'09. 39th IEEE, 2009, pp. 1-6: IEEE.

39. Smart-Accredit. (2021, February). Smart Accredit. Available: http://www.smart-accredit.com/

40. D. Akoumianakis, A. Savidis, and C. Stephanidis, "Encapsulating intelligent interactive behaviour

in unified user interface artefacts," Interacting with Computers, vol. 12, no. 4, pp. 383-408, 2000.

41. H. Youssef, S. M. Sait, and H. Ali, "Fuzzy simulated evolution algorithm for VLSI cell placement,"

Computers & Industrial Engineering, vol. 44, no. 2, pp. 227-247, 2003.

42. A. Walenstein, "Cognitive support in software engineering tools: A distributed cognition framework,"

PhD Dissertation, Computing Science, Simon Fraser University, 2002.

43. P. Jackson, Introduction to Expert Systems. Addison-Wesley, 1998.

44. M. Head and K. Hassanein, "Web site usability for eBusiness success: Dimensions, guidelines and

evaluation methods," in Proceedings of the Hawaii International Conference on Business, 2002.

45. D. C. L. Ngo, "An expert screen design and evaluation assistant that uses knowledge-based

backtracking," Information and Software Technology, vol. 45, no. 6, pp. 293-304, 2003.

46. V. Schnecke and O. Vornberger, "Hybrid genetic algorithms for constrained placement problems,"

IEEE Transactions on Evolutionary Computation, vol. 1, pp. 266-277, 1997.

47. A.-R. Ahmad, "An intelligent expert system for decision analysis and support in multi-attribute layout optimization," PhD Dissertation, University of Waterloo, Canada, 2005.

48. O. Cordón, F. Herrera, F. Gomide, F. Hoffmann, and L. Magdalena, "Ten years of genetic fuzzy

systems: current framework and new trends," in Proceedings joint 9th IFSA world congress and 20th

NAFIPS international conference (Cat. No. 01TH8569), 2001, vol. 3, pp. 1241-1246: IEEE.

49. S. A. R. Khan, P. Ponce, M. Tanveer, N. Aguirre-Padilla, H. Mahmood, and S. A. A. Shah, "Technological Innovation and Circular Economy Practices: Business Strategies to Mitigate the Effects of COVID-19," Sustainability, vol. 13, 8479, 2021. https://doi.org/10.3390/su13158479

50. M. Tanveer, A. R. Ahmad, H. Mahmood, and I. U. Haq, "Role of Ethical Marketing in Driving Consumer Brand Relationships and Brand Loyalty: A Sustainable Marketing Approach," Sustainability, vol. 13, no. 12, 6839, 2021. https://doi.org/10.3390/su13126839

51. A. Rasool, F. A. Shah, and M. Tanveer, "Relational Dynamics between Customer Engagement, Brand Experience, and Customer Loyalty: An Empirical Investigation," Journal of Internet Commerce, vol. 20, no. 3, pp. 273-292, 2021. doi: 10.1080/15332861.2021.1889818

52. S. Hassan, M. Dhali, F. Zaman, and M. Tanveer, "Big data and predictive analytics in healthcare in Bangladesh: regulatory challenges," Heliyon, vol. 7, no. 6, e07179, 2021. https://doi.org/10.1016/j.heliyon.2021.e07179

53. I. U. Haq, A. Hussain, and M. Tanveer, "Evaluating the Scholarly Literature on Information Literacy Indexed in the Web of Science Database," Library Philosophy and Practice (e-journal), 5230, 2021. https://digitalcommons.unl.edu/libphilprac/5230

54. I. U. Haq, R. A. Faridi, and M. Tanveer, "Evaluating the publications output of Pakistan Journal of Information Management and Libraries based on the Scopus Database," Library Philosophy and Practice (e-journal), 4923, 2021. https://digitalcommons.unl.edu/libphilprac/4923

55. A. M. Aldhebaib, I. U. Haq, Fayazul Ul, M. Tanveer, and O. G. Singh, "Radiologic Clinics of North America, Bibliometric Spectrum of Publications from 2000 to 2019," Library Philosophy and Practice (e-journal), 4783, 2021. https://digitalcommons.unl.edu/libphilprac/4783

56. M. Tanveer, N. Khan, and A.-R. Ahmad, "AI Support Marketing: Understanding the Customer Journey towards the Business Development," in 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA), 2021, pp. 144-150. doi: 10.1109/CAIDA51941.2021.9425079

57. M. Tanveer, H. Ali, and I. U. Haq, "Educational Entrepreneurship Policy Challenges and Recommendations for Pakistani Universities," Academy of Strategic Management Journal, vol. 20, no. 2, 2021. https://www.abacademies.org/articles/educational-entrepreneurship-policy-challenges-and-recommendations-for-pakistani-universities-10331.html

58. M. Tanveer, "Analytical Approach on Small and Medium Pakistani Businesses Based on E-Commerce Ethics with Effect on Customer Repurchase Objectives and Loyalty," Journal of Legal, Ethical and Regulatory Issues, vol. 24, no. 3, 2021. https://www.abacademies.org/articles/analytical-approach-on-small-and-medium-pakistani-businesses-based-on-ecommerce-ethics-with-effect-on-customer-repurchase-objectiv-10606.html

59. S. A. Bukhari, M. Tanveer, G. Mustafa, and N. Zia-Ud-Den, "Magnetic Field Stimulation Effect on Germination and Antioxidant Activities of Presown Hybrid Seeds of Sunflower and Its Seedlings," Journal of Food Quality, 2021. https://doi.org/10.1155/2021/5594183

60. H. S. Mahrosh, M. Tanveer, R. Arif, and G. Mustafa, "Computer-Aided Prediction and Identification of Phytochemicals as Potential Drug Candidates against MERS-CoV," BioMed Research International, 2021. https://doi.org/10.1155/2021/5578689

61. M. Tanveer, S. Hassan, and A. Bhaumik, "Academic Policy Regarding Sustainability and Artificial Intelligence (AI)," Sustainability, vol. 12, no. 22, 9435, 2020. https://doi.org/10.3390/su12229435

62. I. U. Haq and M. Tanveer, "Status of Research Productivity and Higher Education in the Members of Organization of Islamic Cooperation (OIC)," Library Philosophy and Practice (e-journal), 3845, 2020. https://digitalcommons.unl.edu/libphilprac/3845

63. K. Ahmad and H. Mahmood, "Macroeconomic Determinants of National Savings Revisited: A Small Open Economy of Pakistan," World Applied Sciences Journal, vol. 21, no. 1, pp. 49-57, 2013.

64. H. Mahmood, T. T. Y. Alkhateeb, and M. Furqan, "Industrialization, urbanization and CO2 emissions in Saudi Arabia: Asymmetry analysis," Energy Reports, vol. 6, pp. 1553-1560, 2020. https://doi.org/10.1016/j.egyr.2020.08.038

65. H. Mahmood and T. T. Y. Alkhateeb, "Trade and Environment Nexus in Saudi Arabia: An Environmental Kuznets Curve Hypothesis," International Journal of Energy Economics and Policy, vol. 7, no. 5, pp. 291-295, 2017.

66. H. Mahmood and T. T. Y. Alkhateeb, "Foreign Direct Investment, Domestic Investment and Oil Price Nexus in Saudi Arabia," International Journal of Energy Economics and Policy, vol. 8, no. 4, pp. 147-151, 2018.

67. H. Mahmood, F. Furqan, and O. A. Bagais, "Environmental accounting of financial development and foreign investment: spatial analyses of East Asia," Sustainability, vol. 11, no. 1, 0013, 2019. https://doi.org/10.3390/su11010013

68. H. Mahmood, T. T. Y. Alkhateeb, M. M. Z. Al-Qahtani, Z. Allam, N. Ahmad, and M. Furqan, "Urbanization, Oil Price and Pollution in Saudi Arabia," International Journal of Energy Economics and Policy, vol. 10, no. 2, pp. 477-482, 2020.

69. M. S. Hassan, M. N. Tahir, A. Wajid, H. Mahmood, and A. Farooq, "Natural Gas Consumption and Economic Growth in Pakistan: Production Function Approach," Global Business Review, vol. 19, no. 2, pp. 297-310, 2018.

70. H. Mahmood, A. S. Alrasheed, and M. Furqan, "Financial market development and pollution nexus in Saudi Arabia: Asymmetrical analysis," Energies, vol. 11, no. 12, 3462, 2018. https://doi.org/10.3390/en11123462

71. N. Ahmad, A. Iqbal, and H. Mahmood, "CO2 Emission, Population and Industrial Growth linkages in Selected South Asian Countries: A Co-integration Analysis," World Applied Sciences Journal, vol. 21, no. 4, pp. 615-622, 2013.

72. M. Murshed, H. Mahmood, T. T. Y. Alkhateeb, and S. Banerjee, "Calibrating the Impacts of Regional Trade Integration and Renewable Energy Transition on the Sustainability of International Inbound Tourism Demand in South Asia," Sustainability, vol. 12, no. 20, 8341, 2020. https://doi.org/10.3390/su12208341

73. H. Mahmood and A. R. Chaudhary, "A Contribution of Foreign Direct Investment in Poverty Reduction in Pakistan," Middle-East Journal of Scientific Research, vol. 12, no. 2, pp. 243-248, 2012.

74. H. Mahmood and A. M. A. Zamil, "Oil price and slumps effects on personal consumption in Saudi Arabia," International Journal of Energy Economics and Policy, vol. 9, no. 4, pp. 12-15, 2019.

75. T. T. Y. Alkhateeb, H. Mahmood, and Z. A. Sultan, "The Relationship between Exports and Economic Growth in Saudi Arabia," Asian Social Science, vol. 12, no. 4, pp. 117-124, 2016.

76. A. Siddiqui, H. Mahmood, and D. Margaritis, "Oil Prices and Stock Markets during the 2014-16 Oil Price Slump: Asymmetries and Speed of Adjustment in GCC and Oil Importing Countries," Emerging Markets Finance and Trade, vol. 56, no. 15, pp. 3678-3708, 2020. https://doi.org/10.1080/1540496X.2019.1570497

77. N. A. M. Senan, H. Mahmood, and S. Liaquat, "Financial Markets and Electricity Consumption Nexus in Saudi Arabia," International Journal of Energy Economics and Policy, vol. 8, no. 1, pp. 12-16, 2018.

78. N. F. Maalel and H. Mahmood, "Oil-Abundance and Macroeconomic Performance in the GCC Countries," International Journal of Energy Economics and Policy, vol. 8, no. 2, pp. 182-187, 2018.

79. L. Xue, M. Haseeb, H. Mahmood, T. T. Y. Alkhateeb, and M. Murshed, "Renewable Energy Use and Ecological Footprints Mitigation: Evidence from Selected South Asian Economies," Sustainability, vol. 13, no. 4, 1613, 2021. https://doi.org/10.3390/su13041613

80. H. Mahmood, M. Furqan, T. T. Y. Alkhateeb, and M. M. Fawaz, "Testing the Environmental Kuznets Curve in Egypt: Role of foreign investment and trade," International Journal of Energy Economics and Policy, vol. 9, no. 2, pp. 225-228, 2019.

81. T. T. Y. Alkhateeb and H. Mahmood, "Energy Consumption and Trade Openness Nexus in Egypt: Asymmetry Analysis," Energies, vol. 12, no. 10, 2018, 2019. https://doi.org/10.3390/en12102018

82. M. U. Hassan, H. Mahmood, and M. S. Hassan, "Consequences of Worker’s Remittances on Human Capital: An In-Depth Investigation for a Case of Pakistan," Middle-East Journal of Scientific Research, vol. 14, no. 3, pp. 443-452, 2013.

83. H. Mahmood and T. T. Y. Alkhateeb, "Asymmetrical effects of real exchange rate on the money demand in Saudi Arabia: A non-linear ARDL approach," PLoS ONE, vol. 13, no. 11, e0207598, 2018. https://doi.org/10.1371/journal.pone.0207598