TRANSCRIPT
Introduction Background Clustering algorithm Results and discussion Conclusion and future work
A Novel Clustering Algorithm in a Neutrosophic Recommender System for Medical Diagnosis
Le Hoang Son
Vietnam National University, Hanoi, Vietnam
23/03/2017
VNU - 23/03/2017 1 / 110
Content
1. Introduction
2. Background
   2.1. Algebraic structures in neutrosophic recommender systems
   2.2. Neutrosophic similarity degree and neutrosophic similarity matrix
3. A clustering algorithm for neutrosophic recommender systems
4. Experimental results
5. Conclusion
Introduction
Medical Diagnosis
Medical diagnosis is a procedure for investigating a person's symptoms in order to determine the underlying disease.
It has received the full attention of both the computer science and applied computer mathematics research communities.
It often involves a huge amount of uncertain, inconsistent, incomplete, and indeterminate data which are very difficult to retrieve [28].
Neutrosophic Set and Neutrosophic Recommender System
Neutrosophic Set:
Proposed by Smarandache [27].
Can handle uncertain, incomplete and inconsistent information.
Neutrosophic Recommender System:
A recommender system based on neutrosophic sets.
Able to solve problems that involve a large amount of uncertain, inconsistent, incomplete and indeterminate data that are notably difficult to retrieve, handle and process.
Background
Medical Diagnosis
Suppose that:
℘ = {p1, p2, ..., pn}, Γ = {s1, s2, ..., sm}, D = {d1, d2, ..., dk}
are the three lists of patients, symptoms and diseases, respectively, where n, m, k ∈ N+ are the numbers of patients, symptoms and diseases, respectively.
Suppose that:
ℜ℘Γ = {ℜ℘Γ(p_i, s_j) : ∀i = 1, 2, ..., n; j = 1, 2, ..., m}
is the set of relations between patients and symptoms, where ℜ℘Γ(p_i, s_j) is the level at which the patient p_i exhibits the symptom s_j. The value of ℜ℘Γ(p_i, s_j) is either a numeric value or a neutrosophic number, depending on the domain of the problem.
And:
ℜΓD = {ℜΓD(s_i, d_j) : ∀i = 1, 2, ..., m; j = 1, 2, ..., k}
is the set which represents the relationship between the symptoms and the diseases, where ℜΓD(s_i, d_j) reveals the probability that symptom s_i leads to the disease d_j. We obtain:
Definition 1
Medical diagnosis is the process of determining the relationship between the patients and the diseases, described as ℜ℘D = {ℜ℘D(p_i, d_j) : ∀i = 1, 2, ..., n; j = 1, 2, ..., k}, where the value of ℜ℘D(p_i, d_j) is either 0 or 1, indicating whether or not the patient p_i has acquired the disease d_j. Mathematically, the problem of medical diagnosis is an implication operator given by the mapping {ℜ℘Γ, ℜΓD} → ℜ℘D.
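The mapping {ℜ℘Γ, ℜΓD} → ℜ℘D of Definition 1 can be sketched in code. The slides do not fix the implication operator, so the max-min composition used below, together with the relation values and the 0.5 threshold, are illustrative assumptions rather than the talk's actual method.

```python
# Sketch (assumed operator and data): compose the patient-symptom relation
# with the symptom-disease relation by a max-min rule, then threshold the
# result to obtain the 0/1 patient-disease relation R_PD of Definition 1.

def compose_maxmin(r_ps, r_sd):
    """Max-min composition of two relations given as nested lists."""
    n, m, k = len(r_ps), len(r_sd), len(r_sd[0])
    return [[max(min(r_ps[i][s], r_sd[s][j]) for s in range(m))
             for j in range(k)] for i in range(n)]

# Two patients, three symptoms, two diseases (hypothetical numeric levels).
R_PS = [[0.9, 0.2, 0.1],
        [0.3, 0.8, 0.7]]
R_SD = [[0.8, 0.1],
        [0.2, 0.9],
        [0.1, 0.6]]

R_PD = compose_maxmin(R_PS, R_SD)
# Threshold the composed degrees into the 0/1 diagnosis relation.
diagnosis = [[1 if v >= 0.5 else 0 for v in row] for row in R_PD]
```

Here the composition assigns patient 1 to disease 1 and patient 2 to disease 2; any t-norm/t-conorm pair could replace min/max in the same skeleton.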
Neutrosophic Set and Simplified Neutrosophic Set
Definition 2
Neutrosophic Set: Let X be a non-empty set and x ∈ X. A neutrosophic set A in X is characterized by a truth membership function TA, an indeterminacy membership function IA, and a falsehood membership function FA. TA(x), IA(x) and FA(x) are real standard or non-standard subsets of ]−0, 1+[, i.e., TA, IA, FA : X → ]−0, 1+[.
There is no restriction on the sum of TA(x), IA(x) and FA(x), so −0 ≤ TA(x) + IA(x) + FA(x) ≤ 3+.
From a philosophical point of view, the neutrosophic set takes its values from real standard or non-standard subsets of ]−0, 1+[. In technical applications, however, it is necessary to use the interval [0, 1] instead of ]−0, 1+[, because ]−0, 1+[ is difficult to apply in real-life settings such as engineering and scientific problems.
If the functions TA(x), IA(x) and FA(x) are singleton subintervals/subsets of the real standard interval, i.e., TA : X → [0, 1], IA : X → [0, 1], FA : X → [0, 1], then a simplification of the neutrosophic set A is denoted by:
(1) A = {(x, TA(x), IA(x), FA(x)) : x ∈ X}
with 0 ≤ TA(x) + IA(x) + FA(x) ≤ 3. It is a subclass of the neutrosophic set known as the simplified neutrosophic set. A simplified neutrosophic set covers the concepts of the interval neutrosophic set and the single-valued neutrosophic set.
Neutrosophication
Definition 3
The main purpose of neutrosophication is to map the inputvariables into neutrosophic input sets. If x is a crisp input, then
where x ∈ X and a_j ≤ x ≤ a_k for the truth membership, b_j ≤ x ≤ b_k for the indeterminacy membership, and c_j ≤ x ≤ c_k for the falsehood membership, respectively, with j, k = 1, 2, 3, 4.
Deneutrosophication
Definition 4
This step is similar to defuzzification in [10] and involves the following two stages.
Stage 1: Synthesization
In this stage, we transform a neutrosophic set H_k into a fuzzy set B by using the function
(2) f = (T_Hk(y), I_Hk(y), F_Hk(y)) : [0, 1] × [0, 1] × [0, 1] → [0, 1]
where f is defined by
(3) T_B(y) = ε1·T_Hk(y) + ε2·(I_Hk(y)/2) + ε3·(F_Hk(y)/4)
and where 0 ≤ ε1, ε2, ε3 ≤ 1 such that ε1 + ε2 + ε3 = 1.
Definition 4 (continued)
Stage 2: Typical neutrosophic value
In this stage, we calculate a typical deneutrosophicated value den(T_B(y)) using the centroid (center of gravity) method, given by:
(4) den(T_B(y)) = ∫_a^b T_B(y)·y dy / ∫_a^b T_B(y) dy
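The two stages of deneutrosophication can be approximated numerically. In the sketch below, the membership functions, the ε coefficients, the domain [0, 1], and the Riemann-sum integration are all illustrative assumptions; only the formulas of Eqs. (3) and (4) come from the slides.

```python
# Sketch of Definition 4: synthesization (Eq. 3) followed by the
# centroid deneutrosophication (Eq. 4) on a discretized domain.

def synthesize(T, I, F, e1=0.5, e2=0.3, e3=0.2):
    """Stage 1 (Eq. 3): fuzzy membership T_B = e1*T + e2*I/2 + e3*F/4."""
    return lambda y: e1 * T(y) + e2 * I(y) / 2 + e3 * F(y) / 4

def centroid(TB, a=0.0, b=1.0, steps=1000):
    """Stage 2 (Eq. 4): center of gravity by a midpoint Riemann sum."""
    h = (b - a) / steps
    ys = [a + (i + 0.5) * h for i in range(steps)]
    num = sum(TB(y) * y for y in ys) * h
    den = sum(TB(y) for y in ys) * h
    return num / den

# Hypothetical membership functions of a neutrosophic set H_k.
TB = synthesize(T=lambda y: y, I=lambda y: 1 - y, F=lambda y: 0.1)
value = centroid(TB)   # a single crisp value in [0, 1]
```

For these linear memberships T_B(y) = 0.35y + 0.155, so the centroid can be checked against the closed-form ratio of the two integrals.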
Single-criterion neutrosophic recommender system (SC-NRS)
Definition 5
The SC-NRS is a utility function ℜ, which is a mapping defined on (X, Y) as follows,
where T_iX(x), I_iX(x), F_iX(x) are the truth membership function, indeterminacy membership function and falsehood membership function of the patient with the i-th linguistic label of the feature X, such that i = 1, 2, ..., s and T_iX(x), I_iX(x), F_iX(x) ∈ [0, 1].
Similarly, T_jY(y), I_jY(y), F_jY(y) are the truth membership function, indeterminacy membership function and falsehood membership function of the symptom with the j-th linguistic label of the feature Y, where j = 1, 2, ..., s and T_jY(y), I_jY(y), F_jY(y) ∈ [0, 1].
And T_lD(d), I_lD(d), F_lD(d) are the truth membership function, indeterminacy membership function and falsehood membership function of the disease D with the l-th linguistic label, where l = 1, 2, ..., s and T_lD(d), I_lD(d), F_lD(d) ∈ [0, 1].
Multi-criteria neutrosophic recommender system (MC-NRS)
Definition 6
The MC-NRS is a utility function ℜ, which is a mapping defined on (X, Y) as follows,
where T, I, F are defined as in the SC-NRS.
Algebraic structures in neutrosophic recommender systems
Proposition 1
The structure (F(NRS), ∩, ∪, NRS_X×Y, NRS_∅) forms a complete lattice.
Proof:
1) From the commutative and associative properties [3] we obtain NRS1 ∩ NRS2 = NRS12 ∈ F(NRS) and NRS1 ∪ NRS2 = NRS12 ∈ F(NRS).
2) From the idempotent property [3] we obtain NRS1 ∩ NRS1 = NRS1 and NRS1 ∪ NRS1 = NRS1.
3) From the commutative and associative properties [3] we note that
NRS1 ∩ NRS2 = NRS2 ∩ NRS1, NRS1 ∪ NRS2 = NRS2 ∪ NRS1,
NRS1 ∩ (NRS2 ∩ NRS3) = (NRS1 ∩ NRS2) ∩ NRS3, NRS1 ∪ (NRS2 ∪ NRS3) = (NRS1 ∪ NRS2) ∪ NRS3.
4) From the definition in [3] (absorption) we obtain NRS1 ∩ (NRS2 ∪ NRS1) = NRS1 and NRS1 ∪ (NRS2 ∩ NRS1) = NRS1.
Thus from 1) to 4), we observe that the structure (F(NRS), ∩, ∪, NRS_X×Y, NRS_∅) forms a lattice.
Consider a collection of neutrosophic recommender systems {NRS_i : i ∈ N} over F(NRS). We state

⋂_{i=1}^∞ X_i ⊆ X, ⋂_{i=1}^∞ Y_i ⊆ Y with X_i ⊆ X, Y_i ⊆ Y

and

D_l^∞ = {R_l^∞; T_l^∞; F_l^∞; I_l^∞} = {R_lq^∞; T_lq^∞; F_lq^∞; I_lq^∞ | q = 1, 2, ..., r; l ∈ N; k ∈ N},

T_lq^∞ = max{T_lq^1; T_lq^2; ...}; F_lq^∞ = min{F_lq^1; F_lq^2; ...}; I_lq^∞ = min{I_lq^1; I_lq^2; ...}.
This implies ⋃_{i=1}^∞ NRS_i ⊆ F(NRS).
Again, we obtain ⋂_{i=1}^∞ NRS_i ⊆ F(NRS).
Thus we have proven that F(NRS) is a complete lattice. �
Proposition 2
The structure (F(NRS), ∪, ∩) is a bounded distributive lattice.
Proof: From condition 4) in Proposition 1, we obtain
NRS1 ∩ (NRS2 ∪ NRS3) = (NRS1 ∩ NRS2) ∪ (NRS1 ∩ NRS3)
and
NRS1 ∪ (NRS2 ∩ NRS3) = (NRS1 ∪ NRS2) ∩ (NRS1 ∪ NRS3)
for all NRS1, NRS2, NRS3 ∈ F(NRS). This completes the proof. �
Proposition 3
De Morgan Laws. Let NRS1, NRS2 ⊆ F(NRS). Then the following conditions hold:
1) (NRS1 ∪ NRS2)^c = NRS1^c ∩ NRS2^c,
2) (NRS1 ∩ NRS2)^c = NRS1^c ∪ NRS2^c.
Proof: We only prove 1).
1) Because we obtain

(NRS1 ∪ NRS2)^c = NRS12^c,
NRS12^c = {X12^c; Y12^c; {D12_l^c} | l = 1, 2, ..., k},
X12^c = (X1 ∪ X2)^c = X1^c ∩ X2^c,
Y12^c = Y1^c ∩ Y2^c,
{D12_l^c} = (R12_l^c; T12_l^c; F12_l^c; I12_l^c) = {(R12_lq^c; T12_lq^c; F12_lq^c; I12_lq^c) | q = 1, 2, ..., r; l ∈ N; k ∈ N},
T12_lq^c = min{F1_lq; F2_lq}; F12_lq^c = min{I1_lq; I2_lq}; I12_lq^c = max{T1_lq; T2_lq}.

From the definition of NRS1^c ∩ NRS2^c the proposition is proved.
2) Provable along the same lines. �
Proposition 4
(F(NRS), ∪, ∩) forms a De Morgan algebra.
Proof: The proof follows from Propositions 2 and 3. �
Proposition 5
(F(NRS), ∪, ∩, ^c) forms a Boolean algebra.
Proof: From Propositions 2 and 3, we note that (F(NRS), ∪, ∩, ^c) is a bounded distributive lattice, and every NRS1 ∈ F(NRS) has its complement NRS1^c ∈ F(NRS), which completes the proof. �
Proposition 6
(F(NRS), ∪, ∩, ^c, NRS_∅) forms a Kleene algebra.
Proof: From Proposition 4, (F(NRS), ∪, ∩, ^c, NRS_∅) forms a De Morgan algebra. Moreover,
NRS1 ∩ NRS1^c = NRS_∅ ⊆ NRS2 ∪ NRS2^c
with NRS1, NRS2 ∈ F(NRS). By the definition, (F(NRS), ∪, ∩, ^c, NRS_∅) is a Kleene algebra. �
Proposition 7
(F(NRS), ∩, ^c, NRS_X×Y) is an MV algebra.
Proof: To prove this, we must verify the following four conditions:
1) MV1: (F(NRS), ∩) is a commutative monoid. This proof is straightforward.
2) MV2: For every NRS1 ∈ F(NRS), we have (NRS1^c)^c = NRS1.
3) MV3: For NRS1 and NRS2, we have (NRS_X×Y)^c ∩ NRS1 = NRS_∅ = (NRS_X×Y)^c.
4) MV4: Because

(NRS1^c ∩ NRS2)^c ∩ NRS2 = ((NRS1^c)^c ∪ NRS2^c) ∩ NRS2
= (NRS1 ∪ NRS2^c) ∩ NRS2
= (NRS1 ∩ NRS2) ∪ (NRS2^c ∩ NRS2)
= (NRS1 ∩ NRS2) ∪ NRS_∅
= (NRS2 ∩ NRS1) ∪ (NRS1^c ∩ NRS1)
= (NRS2 ∪ NRS1^c) ∩ NRS1
= (NRS2^c ∩ NRS1)^c ∩ NRS1

for all NRS1, NRS2 ∈ F(NRS). Thus (F(NRS), ∩, ^c, NRS_X×Y) is an MV algebra. �
Proposition 8
(F(NRS), ∪, ^c, NRS_∅) also forms an MV algebra.
Proof: MV1, MV2 and MV3 are straightforward. We prove MV4. Because

(NRS1^c ∪ NRS2)^c ∪ NRS2 = ((NRS1^c)^c ∩ NRS2^c) ∪ NRS2
= (NRS1 ∪ NRS2) ∩ (NRS2^c ∪ NRS2)
= (NRS1 ∪ NRS2) ∩ NRS_X×Y
= (NRS1 ∪ NRS2) ∩ (NRS1^c ∪ NRS1)
= (NRS2 ∩ NRS1^c) ∪ NRS1
= (NRS2^c ∪ NRS1)^c ∪ NRS1

for all NRS1, NRS2 ∈ F(NRS). Thus (F(NRS), ∪, ^c, NRS_∅) is an MV algebra. �
Proposition 9
(F(NRS), |–|, NRS_∅) is a bounded BCK algebra.
Proof: For any NRS1, NRS2, NRS3 ∈ F(NRS):
1) BCI-1: ((NRS1 |–| NRS2) |–| (NRS1 |–| NRS3)) |–| (NRS3 |–| NRS2) = NRS_∅.
2) BCI-2: (NRS1 |–| (NRS1 |–| NRS2)) |–| NRS2 = NRS_∅.
3) BCI-3: NRS1 |–| NRS1 = NRS_∅.
4) BCI-4: If NRS1 |–| NRS2 = NRS_∅ and NRS2 |–| NRS1 = NRS_∅, then NRS1 = NRS2.
5) BCI-5: NRS_∅ |–| NRS2 = NRS_∅.
Thus (F(NRS), |–|, NRS_∅) is a BCK algebra. Additionally, NRS_X×Y is such that NRS1 |–| NRS_X×Y = NRS_∅ for all NRS1 ∈ F(NRS).
Therefore (F(NRS), |–|, NRS_∅) is a bounded BCK algebra. �
Definition 7
Let (F(NRS), ∪, ∩, ^c) be a bounded lattice and NRS1 ∈ F(NRS). An element NRS1^c is known as a pseudo-complement of NRS1 if NRS1 ∩ NRS1^c = NRS_∅ and NRS2 ⊆ NRS1^c whenever NRS1 ∩ NRS2 = NRS_∅. If every element of the lattice F(NRS) has a pseudo-complement, then F(NRS) is said to be pseudo-complemented. The equation NRS1^c ∪ NRS1 = NRS_X×Y is known as Stone's identity.
Definition 8
A Stone algebra is a pseudo-complemented, distributive lattice satisfying Stone's identity.
Lemma 1
Let NRS1, NRS2 ∈ F(NRS). Then the pseudo-complement of NRS1 relative to NRS2 exists in F(NRS).
Lemma 2
Let NRS1, NRS2 ∈ F(NRS). Then the pseudo-complement of NRS2 relative to NRS1 exists in F(NRS).
Proposition 10
(F(NRS), ∪, ∩, ^c) forms a Brouwerian lattice.
Proof: The proof follows from Lemma 1 and Lemma 2. �
Neutrosophic similarity degree and neutrosophic similarity matrix
Definition 9
Let F(NRS) denote the family of all neutrosophic recommender systems. We define a mapping Θ : F(NRS) × F(NRS) → F(NRS), where NRS_j ∈ F(NRS) for all j = 1, 2, .... Then Θ is referred to as the neutrosophic recommender similarity degree (NRSD) of NRS1 and NRS2 if Θ satisfies the following conditions:
1. Θ(NRS1, NRS2) is a neutrosophic value (NV).
2. Θ(NRS1, NRS2) = (1, 0, 0) if NRS1 = NRS2.
3. Θ(NRS1, NRS2) = Θ(NRS2, NRS1).
We propose a formula to compute the NRSD of NRS1 and NRS2:

(7) Θ(NRS1, NRS2) = 1 − [ Σ_{j=1}^n w_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ ) ]^{1/γ}

where β1, β2 and β3 are coefficients of the truth, indeterminacy and falsehood memberships of NRS, respectively, and are normally set to 1 by default. In addition, γ ≥ 1, w = (w1, w2, ..., wn)^t, w_j ∈ [0, 1] with the condition Σ_{j=1}^n w_j = 1, for all j = 1, 2, ..., n and x_j ∈ X.
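Eq. (7) translates directly into code. In the sketch below, the membership triples and the weight vector are hypothetical sample data; the coefficients default to β1 = β2 = β3 = 1 and γ = 1 as on the slide.

```python
# Sketch of Eq. (7): weighted deviation of two NRSs over shared points x_j.

def nrsd(A, B, w, b1=1.0, b2=1.0, b3=1.0, gamma=1.0):
    """Neutrosophic recommender similarity degree of Eq. (7).
    A, B: lists of (T, I, F) triples over the same x_j; w: weights summing to 1."""
    s = sum(wj * (b1 * abs(Ta - Tb) ** gamma
                  + b2 * abs(Ia - Ib) ** gamma
                  + b3 * abs(Fa - Fb) ** gamma)
            for wj, (Ta, Ia, Fa), (Tb, Ib, Fb) in zip(w, A, B))
    return 1 - s ** (1 / gamma)

# Hypothetical (T, I, F) values of two NRSs at two points x_1, x_2.
A = [(0.8, 0.1, 0.1), (0.6, 0.2, 0.3)]
B = [(0.7, 0.2, 0.1), (0.6, 0.1, 0.4)]
sim = nrsd(A, B, w=[0.5, 0.5])
```

An identical pair of systems gives similarity 1, matching condition 2 of Definition 9.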
Eq. (7) above can weight the deviation of NRS_j for each j = 1, 2, ..., n, as well as the deviations of the respective truth, indeterminacy and falsehood membership functions, so it is highly flexible in this respect. However, if we consider Θ as a function of w, then it becomes a bounded function. For instance,

(8) ∂(w) = Σ_{j=1}^n w_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ )

where γ ≥ 1.
Solving Eq. (8) for the maximum and minimum problems of Eq. (7), we obtain

(9) ∂(w) = Σ_{j=1}^n w_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ ) ≤ max_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ )
A positive integer k always exists such that

(10) max_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ ) = β1·|T_NRS1(x_k) − T_NRS2(x_k)|^γ + β2·|I_NRS1(x_k) − I_NRS2(x_k)|^γ + β3·|F_NRS1(x_k) − F_NRS2(x_k)|^γ
Thus the equality holds only when w_k = 1 and w_j = 0 for j ≠ k. Again, by the definition of boundedness, we obtain

(11) ∂(w) = Σ_{j=1}^n w_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ ) ≥ min_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ )
Again, a positive integer s exists such that

(12) min_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ ) = β1·|T_NRS1(x_s) − T_NRS2(x_s)|^γ + β2·|I_NRS1(x_s) − I_NRS2(x_s)|^γ + β3·|F_NRS1(x_s) − F_NRS2(x_s)|^γ

The equality holds only when w_s = 1 and w_j = 0 for j ≠ s.
We denote ∂̲(NRS1, NRS2) as the lower bound and ∂̄(NRS1, NRS2) as the upper bound, where

(13) ∂̲(NRS1, NRS2) = min_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ )
And

(14) ∂̄(NRS1, NRS2) = max_j ( β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ )

This implies that

1 − (∂̄(NRS1, NRS2))^{1/γ} ≤ Θ(NRS1, NRS2) ≤ 1 − (∂̲(NRS1, NRS2))^{1/γ}
Definition 10
Let NRS1 and NRS2 be two neutrosophic recommender systems (NRSs). Then Θ(NRS1, NRS2) is said to be the neutrosophic recommender similarity degree between NRS1 and NRS2, where

(15) Θ(NRS1, NRS2) = ( 1 − (∂̄(NRS1, NRS2))^{1/γ}, (∂̲(NRS1, NRS2))^{1/γ}, (∂̲(NRS1, NRS2))^{1/γ} )

and γ ≥ 1 is an exponential coefficient that is set to 1 by default.
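The neutrosophic-valued similarity of Eqs. (13)-(15) can be sketched as follows. The sample membership triples are hypothetical; only the min/max bounds and the (T, I, F) assembly come from the slides.

```python
# Sketch of Eqs. (13)-(15): lower/upper deviation bounds over the points
# x_j, assembled into a neutrosophic value (T, I, F).

def nrsd_triple(A, B, b1=1.0, b2=1.0, b3=1.0, gamma=1.0):
    """Return the NV-valued similarity of Eq. (15) for two (T, I, F) lists."""
    devs = [b1 * abs(Ta - Tb) ** gamma + b2 * abs(Ia - Ib) ** gamma
            + b3 * abs(Fa - Fb) ** gamma
            for (Ta, Ia, Fa), (Tb, Ib, Fb) in zip(A, B)]
    lo, hi = min(devs), max(devs)            # Eq. (13) and Eq. (14)
    g = 1 / gamma
    return (1 - hi ** g, lo ** g, lo ** g)   # Eq. (15)

A = [(0.8, 0.1, 0.1), (0.6, 0.2, 0.3)]
B = [(0.7, 0.2, 0.1), (0.6, 0.1, 0.4)]
T, I, F = nrsd_triple(A, B)
```

For identical systems all deviations vanish, so the triple is (1, 0, 0), as condition 2 of Definition 9 requires.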
Theorem 1
The similarity degree defined in Eq. (15) satisfies the conditions of the neutrosophic recommender similarity degree.
Proof:
1) We must prove that Θ(NRS1, NRS2) is a neutrosophic value. For this purpose, we know that

0 ≤ β1·|T_NRS1(x_j) − T_NRS2(x_j)|^γ + β2·|I_NRS1(x_j) − I_NRS2(x_j)|^γ + β3·|F_NRS1(x_j) − F_NRS2(x_j)|^γ
≤ (β1 + β2 + β3)·max(|T_NRS1(x_j) − T_NRS2(x_j)|^γ + |I_NRS1(x_j) − I_NRS2(x_j)|^γ + |F_NRS1(x_j) − F_NRS2(x_j)|^γ)
= max(|T_NRS1(x_j) − T_NRS2(x_j)|^γ + |I_NRS1(x_j) − I_NRS2(x_j)|^γ + |F_NRS1(x_j) − F_NRS2(x_j)|^γ)
≤ 1
which implies that

(16) 0 ≤ 1 − (∂̲(NRS1, NRS2))^{1/γ} ≤ 1 and 0 ≤ 1 − (∂̄(NRS1, NRS2))^{1/γ} ≤ 1

Because (∂̲(NRS1, NRS2))^{1/γ} and (∂̄(NRS1, NRS2))^{1/γ} are the respective lower and upper bounds, therefore

(17) 0 ≤ 1 − (∂̲(NRS1, NRS2))^{1/γ} ≤ 1

(18) 0 ≤ 1 − (∂̄(NRS1, NRS2))^{1/γ} ≤ 1
which means

(19) 0 ≤ 1 − ((∂̄(NRS1, NRS2))^{1/γ} − 2·(∂̲(NRS1, NRS2))^{1/γ}) ≤ 3

Hence Θ(NRS1, NRS2) is a neutrosophic value.

2) Comparing both sides, we obtain

(20) 1 − (∂̄(NRS1, NRS2))^{1/γ} = 1, (∂̲(NRS1, NRS2))^{1/γ} = 0

and we state

(21) 1 − (∂̄(NRS1, NRS2))^{1/γ} ≤ Θ(NRS1, NRS2) ≤ 1 − (∂̲(NRS1, NRS2))^{1/γ}

This gives us Θ(NRS1, NRS2) = (1, 0, 0) when NRS1 = NRS2.

3) This is obvious. �
We associate a matrix to represent this similarity degree. For thispurpose, we define the following.
Definition 11
Let M = [x_jk; s_jk; d_jk]_{3n×n} be a 3n×n matrix. Then M is known as a neutrosophic recommender matrix (NRM) if all of its entries x_jk, s_jk, d_jk (j, k = 1, 2, ..., n) are neutrosophic values.
Definition 12
Let M1 = [x_jk^(1); s_jk^(1); d_jk^(1)]_{3n×n} and M2 = [x_jk^(2); s_jk^(2); d_jk^(2)]_{3n×n} be two NRM matrices. Then M = M1 • M2 is referred to as the composition of M1 and M2, where

(22) x_jk = ⋁_{l=1}^n (x_jl^(1) ∧ x_lk^(2)) = ( max_l{min(T_{x_jl^(1)}, T_{x_lk^(2)})}, min_l{max(I_{x_jl^(1)}, I_{x_lk^(2)})}, min_l{max(F_{x_jl^(1)}, F_{x_lk^(2)})} )
And:

s_jk = ⋁_{l=1}^n (s_jl^(1) ∧ s_lk^(2)) = ( max_l{min(T_{s_jl^(1)}, T_{s_lk^(2)})}, min_l{max(I_{s_jl^(1)}, I_{s_lk^(2)})}, min_l{max(F_{s_jl^(1)}, F_{s_lk^(2)})} ),

d_jk = ⋁_{l=1}^n (d_jl^(1) ∧ d_lk^(2)) = ( max_l{min(T_{d_jl^(1)}, T_{d_lk^(2)})}, min_l{max(I_{d_jl^(1)}, I_{d_lk^(2)})}, min_l{max(F_{d_jl^(1)}, F_{d_lk^(2)})} ),

where j, k = 1, 2, ..., n.
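The composition of Definition 12 can be sketched for one of the three blocks (x_jk); the s_jk and d_jk blocks are composed identically. The sample matrix below is an assumption used only to exercise the rule.

```python
# Sketch of Eq. (22): T by max-min, I and F by min-max over the index l,
# applied entry-wise to square matrices of (T, I, F) triples.

def compose(M1, M2):
    """Composition M1 • M2 of two n x n blocks of neutrosophic values."""
    n = len(M1)
    out = [[None] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            Ts, Is, Fs = zip(*[(min(M1[j][l][0], M2[l][k][0]),
                                max(M1[j][l][1], M2[l][k][1]),
                                max(M1[j][l][2], M2[l][k][2]))
                               for l in range(n)])
            out[j][k] = (max(Ts), min(Is), min(Fs))
    return out

# A small reflexive, symmetric block (hypothetical values).
M = [[(1.0, 0.0, 0.0), (0.6, 0.3, 0.2)],
     [(0.6, 0.3, 0.2), (1.0, 0.0, 0.0)]]
M2 = compose(M, M)
```

For this particular matrix the composition is idempotent (M • M = M), which is exactly the transitivity criterion used later for equivalence matrices.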
Theorem 2
The composition matrix M = M1 •M2 is also a neutrosophicrecommender matrix.
Proof: This can be easily proved by Definition 12. �
Definition 13
The neutrosophic recommender matrix M is known as aneutrosophic recommender similarity matrix (NRSM) if M fulfillsthe following criteria:
1. Reflexive: x_jj = s_jj = d_jj = (1, 0, 0) for j = 1, 2, ..., n.
2. Symmetric: x_jk = x_kj, s_jk = s_kj and d_jk = d_kj, which means that they are component-wise symmetric, i.e.,
T_x_jk = T_x_kj, I_x_jk = I_x_kj, F_x_jk = F_x_kj,
T_s_jk = T_s_kj, I_s_jk = I_s_kj, F_s_jk = F_s_kj,
T_d_jk = T_d_kj, I_d_jk = I_d_kj, F_d_jk = F_d_kj,
for all j, k = 1, 2, ..., n.
Corollary 1
The composition of two NRSM matrices might not be an NRSM.
Proof: Straightforward. �
Theorem 3
Let M1 be an NRSM matrix; then M = M1 • M1 is also an NRSM matrix.
Proof: Straightforward. �
Theorem 4
Let M1, M2 and M3 be three NRSM matrices. Then M1, M2 and M3 satisfy the associative law, i.e.,
(M1 •M2)•M3 = M1 • (M2 •M3).
Proof:
Let (M1 • M2) • M3 = [x_jt; s_jt; d_jt]_{3n×n} and M1 • (M2 • M3) = [x'_jt; s'_jt; d'_jt]_{3n×n}.
By Definition 12, we know that

x_jt = ⋁_{k=1}^n { (⋁_{l=1}^n (x_jl^(1) ∧ x_lk^(2))) ∧ x_kt^(3) }
= ⋁_{k=1}^n ⋁_{l=1}^n (x_jl^(1) ∧ (x_lk^(2) ∧ x_kt^(3)))
= ⋁_{l=1}^n { x_jl^(1) ∧ (⋁_{k=1}^n (x_lk^(2) ∧ x_kt^(3))) }
= x'_jt

for j, t = 1, 2, ..., n. Similarly, we can prove this for s'_jt and d'_jt, which completes the proof. �
Corollary 2
Let M be an NRSM and let k1, k2 be any positive integers. Then

M^(k1+k2) = M^k1 • M^k2

where M^k1 and M^k2 are, respectively, the k1-th and k2-th compositions of M, and M^k1, M^k2 and M^(k1+k2) are NRSMs.
Proof: Straightforward. �
Definition 14
An NRSM M is known as a neutrosophic recommender equivalencematrix (NREM), if it satisfies the following assertions:
1 M is reflexive: xjj = sjj = djj = (1,0,0) for j = 1,2, ...,n.
2 M is symmetric: xjk = xkj ,djk = dkj ,sjk = skj forj ,k = 1,2, ...,n.
3 M is transitive: M^2 ⊆ M.
Theorem 5
Let M be an NRSM and let M → M^2 → M^4 → ... → M^2k → ... be its compositions. After a finite number of compositions, there must exist a positive integer k such that M^2k = M^2(k+1), and M^2k is an NREM.
Proof: This can be proved in a straightforward manner. �
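Theorem 5 suggests a simple procedure: repeatedly square the NRSM until it stops changing, at which point the result is an equivalence matrix. The sketch below works on a single block of triples, with hypothetical entries; the max-min/min-max composition mirrors Definition 12.

```python
# Sketch of Theorem 5: square an NRSM until M^(2k) = M^(2(k+1)),
# which yields a neutrosophic recommender equivalence matrix (NREM).

def compose(M1, M2):
    n = len(M1)
    return [[(max(min(M1[j][l][0], M2[l][k][0]) for l in range(n)),
              min(max(M1[j][l][1], M2[l][k][1]) for l in range(n)),
              min(max(M1[j][l][2], M2[l][k][2]) for l in range(n)))
             for k in range(n)] for j in range(n)]

def equivalence_closure(M):
    while True:
        M2 = compose(M, M)
        if M2 == M:        # fixed point reached: transitive, hence an NREM
            return M
        M = M2

I3 = (1.0, 0.0, 0.0)
M = [[I3, (0.8, 0.1, 0.1), (0.3, 0.5, 0.4)],
     [(0.8, 0.1, 0.1), I3, (0.6, 0.2, 0.3)],
     [(0.3, 0.5, 0.4), (0.6, 0.2, 0.3), I3]]
E = equivalence_closure(M)
```

One squaring already lifts the weak direct link between items 1 and 3 to the stronger path through item 2, after which the matrix is stable.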
Definition 15
Let M = [x_jk; s_jk; d_jk]_{3n×n} be an NRSM, where x_jk = (T_x_jk, I_x_jk, F_x_jk), s_jk = (T_s_jk, I_s_jk, F_s_jk) and d_jk = (T_d_jk, I_d_jk, F_d_jk) for j, k = 1, 2, ..., n. Then X_λ = (λx_jk)_{n×n}, S_λ = (λs_jk)_{n×n}, D_λ = (λd_jk)_{n×n} is known as the λ-cutting matrix of X, S and D, where
Definition 16
The matrix M∗ is known as a neutrosophic equivalence matrix(NEM) if it satisfies the following conditions
1 Reflexive: x*_jj = s*_jj = d*_jj = 1 for j = 1, 2, ..., n, and x*_jk, s*_jk, d*_jk ∈ [0, 1] for j, k = 1, 2, ..., n.
2 Symmetric: x*_jk = x*_kj, d*_jk = d*_kj, s*_jk = s*_kj for j, k = 1, 2, ..., n.
3 Transitive: max_l(min(x*_jl, x*_lk)) ≤ x*_jk, max_l(min(s*_jl, s*_lk)) ≤ s*_jk, max_l(min(d*_jl, d*_lk)) ≤ d*_jk for j, k = 1, 2, ..., n.
Theorem 6
M = [x_jk; s_jk; d_jk]_{3n×n} is an NREM if its λ-cutting matrix M* = [x*_jk; s*_jk; d*_jk]_{3n×n} is an NEM, where x*_jk = (T_x*_jk, I_x*_jk, F_x*_jk), s*_jk = (T_s*_jk, I_s*_jk, F_s*_jk) and d*_jk = (T_d*_jk, I_d*_jk, F_d*_jk) for j, k = 1, 2, ..., n.
Proof: Straightforward. �
Definition 17
Let NRS_j (j = 1, 2, ..., n) be a collection of neutrosophic recommender systems, M = [x_jk; s_jk; d_jk]_{3n×n} be an NRSM, M* = [x*_jk; s*_jk; d*_jk]_{3n×n} be an NREM of M, and M*_λ = [λx*_jk; λs*_jk; λd*_jk]_{3n×n} be the λ-cutting matrix of M*. If the corresponding entries in both the j-th line (column) and the k-th line (column) of M*_λ are equal, then NRS_j and NRS_k are said to be of one type.
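Definition 17 together with Corollary 3 gives the clustering rule itself: cut the equivalence matrix at level λ and group indices whose columns coincide. The sketch below applies this to a single scalar block (e.g., the truth components of an NREM); the matrix and the λ values are assumptions.

```python
# Sketch of Definition 17: a lambda-cut of an equivalence matrix followed
# by grouping indices with identical columns ("of one type").

def clusters_from_cut(E, lam):
    """Binarize E at threshold lam, then group indices with equal columns."""
    n = len(E)
    cut = [[1 if E[j][k] >= lam else 0 for k in range(n)] for j in range(n)]
    groups = {}
    for k in range(n):
        col = tuple(cut[j][k] for j in range(n))
        groups.setdefault(col, []).append(k)
    return list(groups.values())

# Truth components of an equivalence matrix over 4 items (hypothetical).
E = [[1.0, 0.8, 0.4, 0.4],
     [0.8, 1.0, 0.4, 0.4],
     [0.4, 0.4, 1.0, 0.7],
     [0.4, 0.4, 0.7, 1.0]]

coarse = clusters_from_cut(E, 0.7)   # lower lambda: fewer, larger clusters
fine = clusters_from_cut(E, 0.9)     # higher lambda: all singletons
```

Raising λ refines the partition, so sweeping λ yields the usual hierarchy of clusterings from an equivalence matrix.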
Corollary 3
If NRSj and NRSk are of the same type, and NRSk and NRSl areof the same type, then NRSj and NRSl are of the same type.
Based on the above theory, we develop a clustering algorithm for neutrosophic recommender systems in the next section.
A clustering algorithm for NRS
A clustering algorithm for neutrosophic recommender systems
Suppose that X = {x1, x2, ..., xn} is a finite set of alternatives (features), S = {s1, s2, ..., sn} is a set of attributes (symptoms), and D = {d1, d2, ..., dn} is a set of diseases in a multi-attribute diagnostic decision-making problem.
This section presents a new clustering algorithm for NRSs based on the similarity matrices of the previous section.
1) Step 1: Using the neutrosophication process in Def. (2), we transform $X = \{x_1, x_2, \ldots, x_n\}$, $S = \{s_1, s_2, \ldots, s_n\}$ and $D = \{d_1, d_2, \ldots, d_n\}$ into NSs and NRSs:

\[
X = \{(x_j, T_X(x_j), I_X(x_j), F_X(x_j))\},\quad
S = \{(s_k, T_S(s_k), I_S(s_k), F_S(s_k))\},\quad
D = \{(d_l, T_D(d_l), I_D(d_l), F_D(d_l))\},
\tag{24}
\]

where $j, k, l = 1, 2, \ldots, p$. In Eq. (24), $T_X(x_j)$ represents the truth degree of $x_j$ in $X$, $I_X(x_j)$ the indeterminacy degree of $x_j$, and $F_X(x_j)$ the falsity degree of $x_j$ in $X$; similarly for $S$ and $D$.
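Def. (2) is not reproduced on these slides, so the following is only a minimal sketch of one plausible neutrosophication step, assuming a trapezoidal scheme over breakpoints (a1, a2, a3, a4) like those chosen later in the parameter settings. The function name, the complement rule for F, and the middle-band rule for I are illustrative assumptions, not the paper's definition.

```python
# Hypothetical neutrosophication sketch: map a crisp value v to an
# assumed (T, I, F) triple using breakpoints (a1, a2, a3, a4).
def neutrosophy(v, a1, a2, a3, a4):
    def tri_up(x, lo, hi):
        # Linear ramp: 0 at lo, 1 at hi, clipped outside [lo, hi].
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    T = tri_up(v, a1, a4)                            # truth grows over [a1, a4]
    F = 1.0 - T                                      # falsity as complement (assumption)
    I = tri_up(v, a1, a2) * (1.0 - tri_up(v, a3, a4))  # indeterminacy peaks in [a2, a3]
    return T, I, F
```

For example, a value in the middle band gets high indeterminacy, while values at the extremes are nearly pure truth or falsity.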
2) Step 2: From the neutrosophic recommender similarity degree in Eq. (15), we construct the neutrosophic recommender similarity matrix

\[
M = \begin{bmatrix} [x_{jk}]^{T} \\ [s_{jk}]^{T} \\ [d_{jk}]^{T} \end{bmatrix}_{3n\times n}
\tag{25}
\]
where

\[
x_{jk} = \Theta(x_j, x_k) = \Big[\, 1 - \big(\sqrt[\gamma]{\overline{\partial}(x_j, x_k)}\big)_{T_X},\; \big(\sqrt[\gamma]{\underline{\partial}(x_j, x_k)}\big)_{I_X},\; \big(\sqrt[\gamma]{\underline{\partial}(x_j, x_k)}\big)_{F_X} \,\Big],
\tag{26}
\]

\[
s_{jk} = \Theta(s_j, s_k) = \Big[\, 1 - \big(\sqrt[\gamma]{\overline{\partial}(s_j, s_k)}\big)_{T_S},\; \big(\sqrt[\gamma]{\underline{\partial}(s_j, s_k)}\big)_{I_S},\; \big(\sqrt[\gamma]{\underline{\partial}(s_j, s_k)}\big)_{F_S} \,\Big],
\tag{27}
\]

\[
d_{jk} = \Theta(d_j, d_k) = \Big[\, 1 - \big(\sqrt[\gamma]{\overline{\partial}(d_j, d_k)}\big)_{T_D},\; \big(\sqrt[\gamma]{\underline{\partial}(d_j, d_k)}\big)_{I_D},\; \big(\sqrt[\gamma]{\underline{\partial}(d_j, d_k)}\big)_{F_D} \,\Big],
\tag{28}
\]

where $j, k = 1, 2, \ldots, n$.
In Eqs. (26), (27) and (28), we obtain

\[
\sqrt[\gamma]{\underline{\partial}(x_j, x_k)} = \min_{l}\big[\beta_1 |T_{X_l}(x_j) - T_{X_l}(x_k)|^{\gamma} + \beta_2 |I_{X_l}(x_j) - I_{X_l}(x_k)|^{\gamma} + \beta_3 |F_{X_l}(x_j) - F_{X_l}(x_k)|^{\gamma}\big],
\tag{29}
\]

\[
\sqrt[\gamma]{\overline{\partial}(x_j, x_k)} = \max_{l}\big[\beta_1 |T_{X_l}(x_j) - T_{X_l}(x_k)|^{\gamma} + \beta_2 |I_{X_l}(x_j) - I_{X_l}(x_k)|^{\gamma} + \beta_3 |F_{X_l}(x_j) - F_{X_l}(x_k)|^{\gamma}\big],
\tag{30}
\]

\[
\sqrt[\gamma]{\underline{\partial}(s_j, s_k)} = \min_{l}\big[\beta_1 |T_{S_l}(s_j) - T_{S_l}(s_k)|^{\gamma} + \beta_2 |I_{S_l}(s_j) - I_{S_l}(s_k)|^{\gamma} + \beta_3 |F_{S_l}(s_j) - F_{S_l}(s_k)|^{\gamma}\big].
\tag{31}
\]
And:

\[
\sqrt[\gamma]{\overline{\partial}(s_j, s_k)} = \max_{l}\big[\beta_1 |T_{S_l}(s_j) - T_{S_l}(s_k)|^{\gamma} + \beta_2 |I_{S_l}(s_j) - I_{S_l}(s_k)|^{\gamma} + \beta_3 |F_{S_l}(s_j) - F_{S_l}(s_k)|^{\gamma}\big],
\tag{32}
\]

\[
\sqrt[\gamma]{\underline{\partial}(d_j, d_k)} = \min_{l}\big[\beta_1 |T_{D_l}(d_j) - T_{D_l}(d_k)|^{\gamma} + \beta_2 |I_{D_l}(d_j) - I_{D_l}(d_k)|^{\gamma} + \beta_3 |F_{D_l}(d_j) - F_{D_l}(d_k)|^{\gamma}\big],
\tag{33}
\]

\[
\sqrt[\gamma]{\overline{\partial}(d_j, d_k)} = \max_{l}\big[\beta_1 |T_{D_l}(d_j) - T_{D_l}(d_k)|^{\gamma} + \beta_2 |I_{D_l}(d_j) - I_{D_l}(d_k)|^{\gamma} + \beta_3 |F_{D_l}(d_j) - F_{D_l}(d_k)|^{\gamma}\big],
\tag{34}
\]

where $\gamma$, $\beta_1$, $\beta_2$ and $\beta_3$ are predefined parameters stated in Defs. (9)-(10), respectively.
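Eqs. (26)-(34) can be sketched as follows. This is a hedged illustration, assuming each item is a list of $(T, I, F)$ triples indexed by $l$, that the $\gamma$-root is read as raising the min/max aggregate to $1/\gamma$, and the default parameter values $\gamma = \beta_1 = \beta_2 = \beta_3 = 1$; the helper names are hypothetical.

```python
# Sketch of Eqs. (26), (29), (30) under stated assumptions.
def partial_dist(u, v, gamma, betas):
    # Weighted component-wise distance of two (T, I, F) triples, Eqs. (29)-(30) inner term.
    b1, b2, b3 = betas
    return (b1 * abs(u[0] - v[0]) ** gamma
            + b2 * abs(u[1] - v[1]) ** gamma
            + b3 * abs(u[2] - v[2]) ** gamma)

def theta(xj, xk, gamma=1.0, betas=(1.0, 1.0, 1.0)):
    """Entry x_jk = Theta(x_j, x_k): T from the max-distance (Eq. 30),
    I and F from the min-distance (Eq. 29), following Eq. (26)."""
    ds = [partial_dist(u, v, gamma, betas) for u, v in zip(xj, xk)]
    lo = min(ds) ** (1.0 / gamma)   # underline-partial
    hi = max(ds) ** (1.0 / gamma)   # overline-partial
    return (1.0 - hi, lo, lo)       # (T, I, F) entry
```

Identical items yield the entry $(1, 0, 0)$, i.e. full similarity.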
3) Step 3: In this step, we check whether $M$ is a neutrosophic recommender equivalence matrix, i.e. $M^2 \subseteq M$. If it is not, we perform the compositions $M \to M^2 \to M^4 \to \ldots \to M^{2k} \to \ldots$ until $M^{2k} = M^{2(k+1)}$. Thus, $M^{2k}$ is the derived neutrosophic recommender equivalence matrix. Without loss of generality, we denote it as

\[
M^{*} = \begin{bmatrix} x^{*}_{jk} \\ s^{*}_{jk} \\ d^{*}_{jk} \end{bmatrix}_{3n\times n},
\tag{35}
\]

where $x^{*}_{jk} = (T_{x^{*}_{jk}}, I_{x^{*}_{jk}}, F_{x^{*}_{jk}})$, $s^{*}_{jk} = (T_{s^{*}_{jk}}, I_{s^{*}_{jk}}, F_{s^{*}_{jk}})$ and $d^{*}_{jk} = (T_{d^{*}_{jk}}, I_{d^{*}_{jk}}, F_{d^{*}_{jk}})$ for $j, k = 1, 2, \ldots, n$.
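Step 3 can be sketched as follows. The composition operator is defined earlier in the talk and not restated here, so this sketch assumes the usual convention for fuzzy-style equivalence closures: max-min composition on truth degrees and min-max on indeterminacy and falsity degrees.

```python
# Sketch of Step 3 under an assumed composition rule.
def compose(A, B):
    # Compose two n x n matrices of (T, I, F) triples.
    n = len(A)
    C = [[None] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            C[j][k] = (
                max(min(A[j][l][0], B[l][k][0]) for l in range(n)),  # T: max-min
                min(max(A[j][l][1], B[l][k][1]) for l in range(n)),  # I: min-max
                min(max(A[j][l][2], B[l][k][2]) for l in range(n)),  # F: min-max
            )
    return C

def equivalence_closure(M):
    """Square M repeatedly (M -> M^2 -> M^4 -> ...) until squaring stabilises."""
    while True:
        M2 = compose(M, M)
        if M2 == M:
            return M
        M = M2
```

On a non-transitive similarity matrix, the closure raises low direct similarities through intermediate items, which is exactly what the repeated squaring achieves.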
4) Step 4: For a given confidence level $\lambda$, we calculate the $\lambda$-cutting matrix

\[
M^{*}_{\lambda} = \begin{bmatrix} {}_{\lambda}x^{*}_{jk} \\ {}_{\lambda}s^{*}_{jk} \\ {}_{\lambda}d^{*}_{jk} \end{bmatrix}_{3n\times n}.
\tag{36}
\]

5) Step 5: According to the $\lambda$-cutting matrix and the deneutrosophication process defined from Def. (10), the given finite set of alternatives (features) $X = \{x_1, x_2, \ldots, x_n\}$, the given set of attributes (symptoms) $S = \{s_1, s_2, \ldots, s_n\}$ and the given set of diseases $D = \{d_1, d_2, \ldots, d_n\}$ are clustered.
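Steps 4 and 5 can be sketched as follows, under two simplifying assumptions: the $\lambda$-cut keeps an entry when its truth degree reaches $\lambda$ (Eq. (36) actually cuts all three components), and items whose rows in the cut matrix coincide form one cluster, following Definition 17.

```python
# Sketch of Steps 4-5 under the stated assumptions.
def lambda_cut(M_star, lam):
    # Binarise: entry passes the cut when its truth degree T >= lambda.
    return [[1 if entry[0] >= lam else 0 for entry in row] for row in M_star]

def clusters_from_cut(cut):
    # Items with identical rows in the cut matrix are "of one type" (Def. 17).
    groups = {}
    for j, row in enumerate(cut):
        groups.setdefault(tuple(row), []).append(j)
    return list(groups.values())
```

Raising $\lambda$ splits clusters apart; lowering it merges them, which is why the experiments average over several values of $\lambda$.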
Clustering model of neutrosophic recommender system
Figure 1: Clustering model of neutrosophic recommender system.
Experimental result
Experimental environment
Experimental Tools: The proposed algorithm was implemented alongside the methods of Guo [12], Sahin [24], Ye 2014 [40] and Ye 2016 [43]. The algorithms were run in Matlab 2015a on a PC with an Intel(R) Core(TM) i3 CPU [email protected] GHz and 4096 MB RAM, under Windows 7 Professional 64-bit.
Datasets: Four benchmark datasets (RHC, diabetes, breastcancer and DMD) were taken from [6].
Parameter settings:
Neutrosophication process: The values $a_j$, $b_j$ and $c_j$ with $j = 1, 2, 3, 4$ were chosen based on the mean and standard deviation of each dataset. Suppose a field of a dataset has mean $\mu$ and standard deviation $\sigma$. Then $(a_1, a_2, a_3, a_4)$ is chosen as $(\mu - \sigma,\; \mu - \frac{\sigma}{2},\; \mu + \frac{\sigma}{2},\; \mu + \sigma)$. These parameters are chosen so that each tuple in the dataset has a similar probability of taking on specific T, I and F values.

NRSM construction process: $\gamma$, $\beta_1$, $\beta_2$ and $\beta_3$ were set to 1 by default.

$\lambda$-cutting process: The value of $\lambda$ was randomly chosen in [0, 1]. We ran the code for several values of $\lambda$ and took the average results.
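The breakpoint choice above can be sketched in a few lines, assuming the population standard deviation is meant:

```python
# Sketch: breakpoints (a1, a2, a3, a4) from a field's mean and std.
def breakpoints(values):
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5  # population std
    return (mu - sigma, mu - sigma / 2, mu + sigma / 2, mu + sigma)
```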
Experimental objective:
To compare the quality of all the clustering algorithms, we use three validity indices: DB, SSWC and IFV [36].

Davies-Bouldin (DB): Related to the variance ratio criterion, which is based on the ratio between within-group and between-group distances. The quality of the partition is determined by the following formula:

\[
DB = \frac{1}{k}\sum_{l=1}^{k} D_l,
\tag{37}
\]
where:

\[
D_l = \max_{l \neq m}\{D_{l,m}\},
\tag{38}
\]

\[
D_{l,m} = (\bar{d}_l + \bar{d}_m)/d_{m,l},
\tag{39}
\]

where $\bar{d}_l$, $\bar{d}_m$ are the average distances of clusters $l$ and $m$, respectively, and $d_{m,l}$ is the distance between these clusters:

\[
\bar{d}_l = \frac{1}{N_l}\sum_{x_i \in C_l} \|x_i - \bar{x}_l\|.
\tag{40}
\]

A lower value of the DB criterion is better.
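Eqs. (37)-(40) can be sketched for one-dimensional clusters as follows; the function name is illustrative, and the between-cluster distance $d_{m,l}$ is assumed to be the centroid distance:

```python
# Sketch of the Davies-Bouldin index, Eqs. (37)-(40); lower is better.
def db_index(clusters):
    """clusters: list of lists of 1-D points (for simplicity)."""
    cents = [sum(c) / len(c) for c in clusters]                 # cluster centroids
    dbar = [sum(abs(x - cents[l]) for x in c) / len(c)          # Eq. (40)
            for l, c in enumerate(clusters)]
    k = len(clusters)
    D = [max((dbar[l] + dbar[m]) / abs(cents[l] - cents[m])     # Eqs. (38)-(39)
             for m in range(k) if m != l)
         for l in range(k)]
    return sum(D) / k                                           # Eq. (37)
```

Two tight, well-separated clusters give a small DB value.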
Simplified silhouette width criterion (SSWC):

\[
SSWC = \frac{1}{N}\sum_{j=1}^{N} s_{x_j},
\tag{41}
\]

\[
s_{x_j} = \frac{b_{p,j} - a_{p,j}}{\max\{a_{p,j}, b_{p,j}\}},
\tag{42}
\]

where $a_{p,j}$ is the dissimilarity of object $j$ to its own cluster $p$, and $b_{p,j}$ is the dissimilarity of object $j$ to the nearest other cluster $q$, $q \neq p$. The idea is to replace the average distance by the distance to a representative (expected) point. A greater SSWC value indicates a more efficient algorithm.
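A minimal sketch of Eqs. (41)-(42) for one-dimensional clusters, assuming the "representative point" of each cluster is its centroid:

```python
# Sketch of the simplified silhouette, Eqs. (41)-(42); higher is better.
def sswc(clusters):
    cents = [sum(c) / len(c) for c in clusters]
    total, N = 0.0, 0
    for p, c in enumerate(clusters):
        for x in c:
            a = abs(x - cents[p])                     # distance to own centroid
            b = min(abs(x - cents[q])                 # distance to nearest other
                    for q in range(len(clusters)) if q != p)
            total += (b - a) / max(a, b)              # Eq. (42)
            N += 1
    return total / N                                  # Eq. (41)
```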
IFV:

\[
IFV = \frac{1}{C}\sum_{j=1}^{C}\Big\{\frac{1}{N}\sum_{k=1}^{N} u_{kj}^{2}\Big[\log_2 C - \frac{1}{N}\sum_{k=1}^{N}\log_2 u_{kj}\Big]^{2}\Big\}\times\frac{SD_{\max}}{\bar{\sigma}_D},
\tag{43}
\]

where:

\[
SD_{\max} = \max_{k \neq j}\|V_k - V_j\|^{2},
\tag{44}
\]

\[
\bar{\sigma}_D = \frac{1}{C}\sum_{j=1}^{C}\Big(\frac{1}{N}\sum_{k=1}^{N}\|X_k - V_j\|^{2}\Big).
\tag{45}
\]

The maximal value of IFV indicates better performance.
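Eqs. (43)-(45) can be sketched as follows for one-dimensional data, assuming $u$ is an $N \times C$ membership matrix with strictly positive entries ($\log_2$ is undefined at 0) and $V$ the cluster centroids:

```python
# Sketch of the IFV index, Eqs. (43)-(45), for 1-D data.
from math import log2

def ifv(u, V, X):
    N, C = len(u), len(V)
    sd_max = max((V[k] - V[j]) ** 2                    # Eq. (44)
                 for k in range(C) for j in range(C) if k != j)
    sigma = sum(sum((x - v) ** 2 for x in X) / N       # Eq. (45)
                for v in V) / C
    total = 0.0
    for j in range(C):
        mean_log = sum(log2(u[k][j]) for k in range(N)) / N
        total += sum(u[k][j] ** 2 * (log2(C) - mean_log) ** 2
                     for k in range(N)) / N
    return (total / C) * sd_max / sigma                # Eq. (43)
```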
Clustering Result
(a) Sahin’s algorithm (b) Ye14’s algorithm
(c) Ye16’s algorithm (d) Our proposed algorithm
Figure 2: Clustering result of 4 methods with the diabetes dataset
(a) Sahin’s algorithm (b) Ye14’s algorithm
(c) Ye16’s algorithm (d) Our proposed algorithm
Figure 3: Clustering result of 4 methods with the breast dataset
(a) Sahin’s algorithm (b) Ye14’s algorithm
(c) Ye16’s algorithm (d) Our proposed algorithm
Figure 4: Clustering result of 4 methods with the RHC dataset
(a) Sahin’s algorithm (b) Ye14’s algorithm
(c) Ye16’s algorithm (d) Our proposed algorithm
Figure 5: Clustering result of 4 methods with the DMD dataset
Comparative Results
Figure 6: Comparative results in diabetes dataset
Figure 7: Comparative results in breast dataset
Figure 8: Comparative results in RHC dataset
Figure 9: Comparative results in DMD dataset
Table 1: Comparative results of the proposed method and existing methods (the values in bold font are the best for a given index and a dataset)

Dataset   Method       DB       SSWC    IFV
Average   Sahin        91.795   0.003   0.581
Average   Ye2014       25.360   0.070   108.383
Average   Ye2016       27.368   0.204   138.113
Average   Our Method   21.426   0.267   232.362
Table 2: Comparative time of the proposed method and existing methods (sec)

Dataset   No. of Data   Sahin   Ye14   Ye16   PM
Average   100           0.10    0.20   0.26   3.36
Average   200           0.32    0.85   1.38   13.28
Average   300           0.65    2.37   3.71   29.88
Average   400           1.07    4.98   8.85   62.04
Conclusion and future work
Conclusion
The results showed that our method outperforms the existingmethods in terms of clustering quality for all datasets.
The disadvantage is that the computational time of ourmethod is higher than those of the other methods.
Future work
Improving the computational time of the clustering algorithm;
Designing a Bayesian-based approach to automaticallyestimate the parameter sets of the NRS and clusteringalgorithm;
Extending the NRS for multi-characteristic contexts;
Applying the multi-criteria NRS to solve sophisticatedproblems.
Publication:
Thanh, N. D., Ali, M., & Son, L. H. (2017). A Novel Clustering Algorithm in a Neutrosophic Recommender System for Medical Diagnosis. Cognitive Computation, in press.
Thank you for watching!
References
References I
[1] Akhtar, N., Agarwal, N., & Burjwal, A. (2014). K-mean algorithm for image segmentation using neutrosophy. IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), 2417–2421.
[2] Ali, M., Son, L. H., Thanh, N. D., & Minh, N. V. (2017). A Neutrosophic Recommender System for Medical Diagnosis Based on Algebraic Neutrosophic Measures. Applied Soft Computing (under 3rd revision).
[3] Broumi, S., & Smarandache, F. (2015). Extended Hausdorff Distance and Similarity Measure of Refined Neutrosophic Sets and their Application in Medical Diagnosis. Journal of New Theory 1(7), 64–78.
[4] Connors, A. F., et al. (1996). The effectiveness of right heart catheterization in the initial care of critically ill patients. JAMA 276(11), 889–897.
[5] Davis, D. A., Chawla, N. V., Blumm, N., Christakis, N., & Barabási, A. L. (2008). Predicting individual disease risk based on medical history. Proceedings of the 17th ACM Conference on Information and Knowledge Management, 769–778.
[6] Department of Biostatistics (2016). Vanderbilt University. Available at: http://biostat.mc.vanderbilt.edu/DataSets.
[7] Ding, S., Zhang, J., Jia, H., & Qian, J. (2016). An adaptive density data stream clustering algorithm. Cognitive Computation, 8(1), 30–38.
References II
[8] Duan, L., Street, W. N., & Xu, E. (2011). Healthcare information systems: data mining methods in the creation of a clinical recommender system. Enterprise Inform. Syst. 5(2), 169–181.
[9] Farhadinia, B., & Xu, Z. (2017). Distance and Aggregation-Based Methodologies for Hesitant Fuzzy Decision Making. Cognitive Computation, doi: 10.1007/s12559-016-9436-2.
[10] Klir, G. J., & Yuan, B. (1995). Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice Hall, Upper Saddle River, New Jersey.
[11] Guo, Y., Zhou, C., Chan, H. P., Chughtai, A., Wei, J., Hadjiiski, L. M., & Kazerooni, E. A. (2013). Automated iterative neutrosophic lung segmentation for image analysis in thoracic computed tomography. Med Phys 40(8), doi: 10.1118/1.4812679.
[12] Guo, Y., & Sengur, A. (2015). NCM: Neutrosophic c-means clustering algorithm. Pattern Recognition, 48(8), 2710–2724.
[13] Hassan, S., & Syed, Z. (2010). From netflix to heart attacks: collaborative filtering in medical datasets. Proceedings of the 1st ACM International Health Informatics Symposium, 128–134.
References III
[14] Hong, C., Yu, J., Wan, J., Tao, D., & Wang, M. (2015). Multimodal deep autoencoder for human pose recovery. IEEE Transactions on Image Processing, 24(12), 5659–5670.
[15] Hong, C., Yu, J., Tao, D., & Wang, M. (2015). Image-based three-dimensional human pose recovery by multiview locality-sensitive sparse retrieval. IEEE Transactions on Industrial Electronics, 62(6), 3742–3751.
[16] Jia, H., Ding, S., & Du, M. (2015). Self-tuning p-Spectral clustering based on shared nearest neighbors. Cognitive Computation, 7(5), 622–632.
[17] Kala, R., Janghel, R. R., Tiwari, R., & Shukla, A. (2011). Diagnosis of breast cancer by modular evolutionary neural networks. Int. J. Biomed. Eng. Technol. 7(2), 194–211.
[18] Kononenko, I. (2001). Machine learning for medical diagnosis: history, state of the art and perspective. Artif. Intell. Med. 23(1), 89–109.
[19] Lee, W. P., & Lin, C. H. (2016). Combining expression data and knowledge ontology for gene clustering and network reconstruction. Cognitive Computation, 8(2), 217–227.
References IV
[20] Liu, P., & Tang, G. (2016). Multi-criteria Group Decision-Making Based on Interval Neutrosophic Uncertain Linguistic Variables and Choquet Integral. Cognitive Computation, 8(6), 1036–1056.
[21] Mahdavi, M. M. (2012). Implementation of a recommender system on medical recognition and treatment. Int. J. e-Education, e-Business, e-Management, e-Learning 2(4), 315–318.
[22] Moein, S., Monadjemi, S. A., & Moallem, P. (2009). A novel fuzzy-neural based medical diagnosis system. Int. J. Biol. Med. Sci. 4(3), 146–150.
[23] Own, C. M. (2009). Switching between type-2 fuzzy sets and intuitionistic fuzzy sets: an application in medical diagnosis. Appl. Intell. 31(3), 283–291.
[24] Sahin, R. (2014). Neutrosophic Hierarchical Clustering Algorithms. Neutrosophic Sets and Systems, 2, 18–24.
[25] Samuel, A. E., & Balamurugan, M. (2012). Fuzzy max–min composition technique in medical diagnosis. Appl. Math. Sci. 6(35), 1741–1746.
[26] Shinoj, T. K., & John, S. J. (2012). Intuitionistic Fuzzy Multisets and its Application in Medical Diagnosis. World Acad. Sci. Eng. Technol. 6, 1418–1421.
[27] Smarandache, F. (1998). A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic. Rehoboth: American Research Press.
References V
[28] Szmidt, E., & Kacprzyk, J. (2001). Intuitionistic fuzzy sets in some medical applications. Proceedings of Computational Intelligence: Theory and Applications, 148–151.
[29] Szmidt, E., & Kacprzyk, J. (2003). An intuitionistic fuzzy set based approach to intelligent data analysis: an application to medical diagnosis. Proceedings of Recent Advances in Intelligent Paradigms and Applications, 57–70.
[30] Szmidt, E., & Kacprzyk, J. (2004). A similarity measure for intuitionistic fuzzy sets and its application in supporting medical diagnostic reasoning. Proceedings of Artificial Intelligence and Soft Computing 2004, 388–393.
[31] Son, L. H., & Thong, N. T. (2015). Intuitionistic Fuzzy Recommender Systems: An Effective Tool for Medical Diagnosis. Knowledge-Based Systems 74, 133–150.
[32] Son, L. H., & Tuan, T. M. (2016). A cooperative semi-supervised fuzzy clustering framework for dental X-ray image segmentation. Expert Systems With Applications 46, 380–393.
[32a] Thong, N. T., & Son, L. H. (2015). HIFCF: An effective hybrid model between picture fuzzy clustering and intuitionistic fuzzy recommender systems for medical diagnosis. Expert Systems With Applications 42(7), 3682–3701.
References VI
[33] Tuan, T. M., Duc, N. T., Hai, P. V., & Son, L. H. (2017). Dental diagnosis from X-ray images using fuzzy rule-based systems. International Journal of Fuzzy System Applications 6(1), 1–16.
[34] Tuan, T. M., Ngan, T. T., & Son, L. H. (2016). A Novel Semi-Supervised Fuzzy Clustering Method based on Interactive Fuzzy Satisficing for Dental X-Ray Image Segmentation. Applied Intelligence 45(2), 402–428.
[35] Tuan, T. M., & Son, L. H. (2016). A novel framework using graph-based clustering for dental X-ray image search in medical diagnosis. International Journal of Engineering and Technology 8(6), 422–427.
[36] Vendramin, L., Campello, R. J., & Hruschka, E. R. (2010). Relative clustering validity criteria: a comparative overview. Statistical Analysis and Data Mining: The ASA Data Science Journal, 3(4), 209–235.
[37] Yao, Y. (2016). Three-way decisions and cognitive computing. Cognitive Computation, 8(4), 543–554.
[38] Ye, J. (2015). Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artificial Intelligence in Medicine 63, 171–179.
References VII
[39] Ye, S., & Ye, J. (2014). Dice Similarity Measure between Single Valued Neutrosophic Multisets and Its Application in Medical Diagnosis. Neutrosophic Sets and Systems 6, 48–53.
[40] Ye, J. (2014). A netting method for clustering-simplified neutrosophic information. Journal of Intelligent Systems 23(4), 379–389.
[41] Ye, S., Fu, J., & Ye, J. (2015). Medical Diagnosis Using Distance-Based Similarity Measures of Single Valued Neutrosophic Multisets. Neutrosophic Sets and Systems 7, 47–54.
[42] Ye, J., & Fu, J. (2015). Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function. Computer Methods and Programs in Biomedicine 123, 142–149.
[43] Ye, J. (2014). Clustering Methods Using Distance-Based Similarity Measures of Single-Valued Neutrosophic Sets. Journal of Intelligent Systems 23(4), 379–389.
[44] Yu, J., Rui, Y., Tang, Y. Y., & Tao, D. (2014). High-order distance-based multiview stochastic learning in image classification. IEEE Transactions on Cybernetics, 44(12), 2431–2442.
References VIII
[45] Yu, J., Zhang, B., Kuang, Z., Lin, D., & Fan, J. (2016). iPrivacy: Image Privacy Protection by Identifying Sensitive Objects via Deep Multi-Task Learning. IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2016.2636090.
[46] Yu, J., Yang, X., Gao, F., & Tao, D. (2017). Deep multimodal distance metric learning using click constraints for image ranking. IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2016.2591583.
[47] Zhang, M., Zhang, L., & Cheng, H. (2010). Segmentation of ultrasound breast images based on a neutrosophic method. Opt. Eng. 49(11), 117001, doi: 10.1117/1.3505
[48] Zhang, H. Y., Ji, P., Wang, J. Q., & Chen, X. H. (2016). A neutrosophic normal cloud and its application in decision-making. Cognitive Computation, 8(4), 649–669.