Rank Minimization over Finite Fields
Vincent Y. F. Tan
Laura Balzano, Stark C. Draper
Department of Electrical and Computer Engineering, University of Wisconsin-Madison
ISIT 2011
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 1 / 20
Connection between Two Areas of Study
| Area of Study | Matrix Completion / Rank Minimization | Rank-Metric Codes |
|---|---|---|
| Applications | Collaborative Filtering, Minimal Realization | Crisscross Error Correction, Network Coding |
| Field | Real R, Complex C | Finite Field F_q |
| Techniques | Functional Analysis | Algebraic Coding |
| Decoding | Convex Optimization, Nuclear Norm | Berlekamp–Massey, Error Trapping |
Can we draw analogies between the two areas of study?
Rank minimization over finite fields
Minimum rank distance decoding
A rank-metric code C is a non-empty subset of F_q^{m×n} endowed with the rank distance.

Transmit some codeword C ∈ C.

Receive some matrix R ∈ F_q^{m×n}.
The decoding problem (under mild conditions) is

    Ĉ = arg min_{C ∈ C} rank(R − C)

This is minimum distance decoding, since rank induces a metric.

X ≡ R − C is known as the error matrix, assumed low-rank.

Assume a linear code. Rank minimization over a finite field:

    X̂ = arg min_{X ∈ coset} rank(X)
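The coset formulation can be made concrete with a tiny brute-force decoder over F_2. This is an illustrative sketch, not an algorithm from the talk: the rank-metric code (the span of two made-up basis matrices `G1`, `G2`) and the error pattern are hypothetical, and `gf2_rank` computes rank over GF(2) by Gaussian elimination on bit-packed rows.

```python
import itertools

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), via elimination on bit-packed rows."""
    basis = {}  # leading-bit position -> reduced pivot row
    for row in matrix:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]  # eliminate the leading bit, keep reducing
    return len(basis)

def mat_xor(A, B):
    """Matrix addition over F_2 (entrywise XOR)."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Toy linear rank-metric code in F_2^{3x3}: the span of two basis matrices
# (hypothetical, chosen only for illustration).
G1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
G2 = [[1, 1, 1], [0, 0, 0], [0, 0, 0]]

def codewords():
    zero = [[0] * 3 for _ in range(3)]
    for a, b in itertools.product([0, 1], repeat=2):
        C = zero
        if a: C = mat_xor(C, G1)
        if b: C = mat_xor(C, G2)
        yield C

# Transmit C = G1 and corrupt it with a rank-1 error confined to one row.
C_tx = G1
X = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
R = mat_xor(C_tx, X)

# Minimum rank distance decoding: C_hat = argmin_{C in code} rank(R - C).
C_hat = min(codewords(), key=lambda C: gf2_rank(mat_xor(R, C)))
print(C_hat == C_tx)  # True: the rank-1 error matrix is the closest coset element
```

Real rank-metric decoders (e.g. for Gabidulin codes) avoid this exponential enumeration; the point here is only the objective rank(R − C) evaluated over a finite field.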
When do low-rank errors occur?
Probabilistic crisscross error correction
Roth, IT Trans 1997
[Embedded figure: first page of R. M. Roth, "Probabilistic Crisscross Error Correction," IEEE Trans. Inf. Theory, vol. 43, no. 5, Sept. 1997, p. 1425. The crisscross error model confines corrupted symbols to a prescribed number of rows or columns (or both); Fig. 1 of the paper shows an error pattern confined to two rows and three columns.]
Data storage applications: Data stored in arrays
Error patterns confined to two rows and three columns, as in the figure above
Low-rank errors
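The claim that crisscross patterns are low-rank follows from a simple bound: an error confined to u rows and v columns is the sum of a matrix supported on u rows (rank ≤ u) and one supported on v columns (rank ≤ v), so its rank is at most u + v. A quick sanity check over F_2 (the support sets and the `gf2_rank` helper are illustrative, not from the talk):

```python
import random

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), via elimination on bit-packed rows."""
    basis = {}  # leading-bit position -> reduced pivot row
    for row in matrix:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

random.seed(0)
n = 8
err_rows, err_cols = {1, 4}, {0, 3, 6}  # crisscross support: 2 rows, 3 columns

# Random error confined to those rows and columns, as in Roth's Fig. 1.
X = [[random.randint(0, 1) if (i in err_rows or j in err_cols) else 0
      for j in range(n)] for i in range(n)]

# Rank is at most |err_rows| + |err_cols| = 5, well below n = 8.
print(gf2_rank(X) <= len(err_rows) + len(err_cols))  # True
```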
Problem Setup: Rank minimization over finite fields
Square matrix X ∈ F_q^{n×n} has rank ≤ r ≤ n.

Assume that the sensing matrices H_1, …, H_k ∈ F_q^{n×n} are random.

Arithmetic is performed in the field F_q ⇒ each measurement y_a ∈ F_q.

Given (y^k, H^k), find necessary and sufficient conditions on k and the sensing model such that recovery is reliable, i.e., P(E_n) → 0.
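A toy instance of this setup, assuming inner-product measurements y_a = ⟨H_a, X⟩ = Σ_{i,j} H_a(i,j) X(i,j) computed in F_2 (the specific sensing map, field size, and parameters below are illustrative assumptions, not fixed by the slide). The decoder exhaustively picks the lowest-rank matrix consistent with all k measurements, which is exactly the min-rank program and is feasible here only because n is tiny:

```python
import itertools
import random

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), via elimination on bit-packed rows."""
    basis = {}
    for row in matrix:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def measure(H, M):
    """Inner product <H, M> over F_2 (assumed sensing model)."""
    return sum(h & m for hr, mr in zip(H, M) for h, m in zip(hr, mr)) % 2

random.seed(1)
n, k = 3, 7
# Unknown rank-1 matrix X = u v^T over F_2 (toy instance).
u, v = [1, 0, 1], [1, 1, 0]
X_true = [[u[i] & v[j] for j in range(n)] for i in range(n)]

# k random sensing matrices with i.i.d. uniform F_2 entries.
Hs = [[[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
      for _ in range(k)]
y = [measure(H, X_true) for H in Hs]

# Min-rank decoder: among all matrices consistent with (y^k, H^k),
# pick the one of lowest rank.
feasible = []
for bits in itertools.product([0, 1], repeat=n * n):
    M = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
    if all(measure(H, M) == ya for H, ya in zip(Hs, y)):
        feasible.append(M)
X_hat = min(feasible, key=gf2_rank)
# X_true is always feasible, so rank(X_hat) <= rank(X_true); with enough
# random measurements, X_hat = X_true with high probability.
```

The question on the slide is precisely how k must scale with n, r, q, and the sensing model so that this minimizer equals X with probability tending to 1.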
![Page 23: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/23.jpg)
Main Results
k: Num. of linear measurements
n: Dim. of matrix X
r: Max. rank of matrix X
γ = r/n: Rank-dimension ratio

Critical Quantity: 2γ(1 − γ/2)n² = 2rn − r²

| Result | Statement | Consequence |
| --- | --- | --- |
| Converse | k < (2 − ε)γ(1 − γ/2)n² | P(En) ↛ 0 |
| Achievability (Uniform) | k > (2 + ε)γ(1 − γ/2)n² | P(En) → 0, P(En) ≈ q^(−n²E(R)) |
| Achievability (Sparse) | k > (2 + ε)γ(1 − γ/2)n² | P(En) → 0 |
| Achievability (Noisy) | k ≳ (3 + ε)(γ + σ)n² | P(En) → 0 (q assumed large) |
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 6 / 20
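The two forms of the critical quantity are algebraically identical: with γ = r/n, 2γ(1 − γ/2)n² expands to 2rn − r². A quick numerical check in plain Python (the function name is mine):

```python
def critical_quantity(n, r):
    """2*gamma*(1 - gamma/2)*n^2 with gamma = r/n."""
    gamma = r / n
    return 2 * gamma * (1 - gamma / 2) * n ** 2

# The expression simplifies exactly to 2rn - r^2
for n in range(1, 20):
    for r in range(0, n + 1):
        assert abs(critical_quantity(n, r) - (2 * r * n - r ** 2)) < 1e-9
```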
![Page 31: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/31.jpg)
A necessary condition on number of measurements
Given k measurements ya ∈ Fq and sensing matrices Ha ∈ Fq^{n×n}, we want a necessary condition for reliable recovery of X.

Proposition (Converse). Assume:

X drawn uniformly at random from all matrices in Fq^{n×n} of rank ≤ r

Sensing matrices Ha, a = 1, . . . , k, jointly independent of X

r/n → γ (constant)

If the number of measurements satisfies

k < (2 − ε)γ(1 − γ/2)n²

then P(X̂ ≠ X) ≥ ε/2 for all n sufficiently large.
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 7 / 20
![Page 33: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/33.jpg)
A sensing model: Uniform model
Now we assume that X is non-random

rank(X) ≤ r = γn

Each entry of each sensing matrix Ha is i.i.d. with the uniform distribution on Fq:

P([Ha]i,j = h) = 1/q, ∀ h ∈ Fq
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 8 / 20
![Page 37: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/37.jpg)
The min-rank decoder
We employ the min-rank decoder:

minimize rank(X̂) subject to ⟨Ha, X̂⟩ = ya, a = 1, . . . , k

NP-hard, combinatorial

Denote the set of optimizers by S, with X* an optimizer

Define the error event:

En := {|S| > 1} ∪ ({|S| = 1} ∩ {X* ≠ X})
We want the solution to be unique and correct
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 9 / 20
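For intuition, the min-rank decoder can be realized by brute force at toy scale. The sketch below is illustrative only: it searches exhaustively over F2^{n×n}, which is feasible just for tiny n (consistent with the problem being NP-hard in general), and returns the optimizer set S. The helper names are mine.

```python
import itertools
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(rank, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for i in range(M.shape[0]):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]  # row reduction is XOR over F_2
        rank += 1
    return rank

def min_rank_decode(H, y, n):
    """Exhaustive min-rank decoder over F_2: return the optimizer set S."""
    feasible = [Z for Z in (np.array(b).reshape(n, n)
                            for b in itertools.product([0, 1], repeat=n * n))
                if all(int((h * Z).sum()) % 2 == ya for h, ya in zip(H, y))]
    best = min(rank_gf2(Z) for Z in feasible)
    return [Z for Z in feasible if rank_gf2(Z) == best]

# Usage: measuring every entry of X with the standard basis matrices makes
# the feasible set {X}, so S = {X} (unique and correct)
n = 2
X = np.array([[1, 0], [1, 0]])
H = [np.eye(1, n * n, j, dtype=int).reshape(n, n) for j in range(n * n)]
y = [int((h * X).sum()) % 2 for h in H]
S = min_rank_decode(H, y, n)
assert len(S) == 1 and (S[0] == X).all()
```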
![Page 42: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/42.jpg)
Achievability under uniform model
Proposition (Achievability under uniform model). Assume:

Sensing matrices Ha drawn uniformly

Min-rank decoder is used

r/n → γ (constant)

If the number of measurements satisfies

k > (2 + ε)γ(1 − γ/2)n²

then P(En) → 0.
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 10 / 20
![Page 43: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/43.jpg)
Proof Sketch
En = ⋃_{Z ≠ X : rank(Z) ≤ rank(X)} {⟨Z, Ha⟩ = ⟨X, Ha⟩, ∀ a = 1, . . . , k}

By the union bound, the error probability can be bounded as

P(En) ≤ Σ_{Z ≠ X : rank(Z) ≤ rank(X)} P(⟨Z, Ha⟩ = ⟨X, Ha⟩, ∀ a = 1, . . . , k)

At the same time, by uniformity and independence,

P(⟨Z, Ha⟩ = ⟨X, Ha⟩, ∀ a = 1, . . . , k) = q^(−k)

The number of n × n matrices over Fq of rank ≤ r is bounded above by 4q^(2γ(1−γ/2)n²). Combining, P(En) ≤ 4q^(2γ(1−γ/2)n² − k), which vanishes whenever k > (2 + ε)γ(1 − γ/2)n².

Remark: Can be extended to noisy case
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 11 / 20
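The counting step can be sanity-checked against the exact formula for the number of n × n matrices over Fq of rank exactly s (a standard q-binomial counting formula, not stated on the slide): in the small cases below, the rank-≤ r count never exceeds 4q^(2rn − r²).

```python
def count_rank_exact(n, s, q):
    """Number of n x n matrices over F_q with rank exactly s
    (standard q-binomial counting formula)."""
    num = den = 1
    for i in range(s):
        num *= q ** n - q ** i   # ways to pick s independent rows/columns
        den *= q ** s - q ** i
    return num * num // den      # exact integer

# Check the slide's upper bound 4 * q^(2rn - r^2) on the rank-<= r count
for q in (2, 3):
    for n in range(1, 7):
        for r in range(n + 1):
            total = sum(count_rank_exact(n, s, q) for s in range(r + 1))
            assert total <= 4 * q ** (2 * r * n - r * r)
            if r == n:
                assert total == q ** (n * n)  # all matrices have rank <= n
```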
![Page 48: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/48.jpg)
The Reliability Function
Definition. The rate of a sequence of linear measurement models is defined as

R := lim_{n→∞} (1 − k/n²), where k = # measurements

Analogy to coding: The rate of the code

C = {C : ⟨C, Ha⟩ = 0, a = 1, . . . , k}

is lower bounded by 1 − k/n².

Definition. The reliability function of the min-rank decoder is defined as

E(R) := lim_{n→∞} −(1/n²) log_q P(En)
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 12 / 20
![Page 51: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/51.jpg)
The Reliability Function
Our achievability proof gives a lower bound on E(R)

The upper bound on E(R) is trickier. But...

Proposition (Reliability Function of Min-Rank Decoder). Assume:

Sensing matrices Ha drawn uniformly

Min-rank decoder is used

r/n → γ (constant)

Then

E(R) = |(1 − R) − 2γ(1 − γ/2)|⁺

Note |x|⁺ := max{x, 0}.
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 13 / 20
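The closed form is simple enough to encode directly; a minimal sketch (the function name is mine):

```python
def reliability_function(R, gamma):
    """E(R) = |(1 - R) - 2*gamma*(1 - gamma/2)|^+ for the min-rank decoder
    under the uniform sensing model (|x|^+ = max{x, 0})."""
    return max((1 - R) - 2 * gamma * (1 - gamma / 2), 0.0)

# The exponent is positive exactly when 1 - R (~ k/n^2) exceeds the
# critical quantity 2*gamma*(1 - gamma/2)
assert reliability_function(R=0.9, gamma=0.1) == 0.0  # 1 - R below critical
assert reliability_function(R=0.5, gamma=0.1) > 0.0   # 1 - R above critical
```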
![Page 54: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/54.jpg)
Interpretation
E(R) ≈ |k/n² − 2γ(1 − γ/2)|⁺

The more the ratio k/n² exceeds 2γ(1 − γ/2):

the larger E(R)

the faster P(En) decays

Linear relationship for the min-rank decoder

de Caen's lower bound: Let B1, . . . , BM be events. Then

P(⋃_{m=1}^{M} Bm) ≥ Σ_{m=1}^{M} P(Bm)² / Σ_{m′=1}^{M} P(Bm ∩ Bm′).

Exploit pairwise independence to make statements about error exponents (linear codes achieve capacity in symmetric DMCs)
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 14 / 20
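de Caen's bound is easy to verify on a toy probability space. The sketch below uses an illustrative example (the events are my choice, not from the talk) and checks that the bound sits between the true union probability and the union bound:

```python
# Toy probability space: uniform measure on {0, ..., 7};
# events B_1, ..., B_M represented as subsets
B = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}]
P = lambda A: len(A) / 8

lhs = P(set().union(*B))  # true probability of the union of the B_m
rhs = sum(P(Bm) ** 2 / sum(P(Bm & Bm2) for Bm2 in B) for Bm in B)

# de Caen's lower bound <= P(union) <= union bound
assert rhs <= lhs <= sum(P(Bm) for Bm in B)
```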
![Page 61: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/61.jpg)
Sparse sensing model
Assume as usual that X is non-random
rank(X) ≤ r = γn
Sensing matrices are sparse
Arithmetic still performed in Fq
Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 15 / 20
![Page 65: Rank Minimization over Finite Fields - ECE@NUS · Rank Minimization over Finite Fields Vincent Y. F. Tan Laura Balzano, Stark C. Draper Department of Electrical and Computer Engineering,](https://reader031.vdocuments.mx/reader031/viewer/2022022519/5b16de667f8b9a596d8e7147/html5/thumbnails/65.jpg)
Sparse sensing model

Each entry of each sensing matrix H^a is i.i.d. with a δ-sparse distribution on Fq:

$$P([H^a]_{ij} = h) = \begin{cases} 1 - \delta & h = 0 \\[2pt] \dfrac{\delta}{q-1} & h \neq 0 \end{cases}$$

Fewer adds and multiplies since H^a is sparse ⇒ encoding is cheaper.

May help in decoding via message-passing algorithms.

How fast can δ, the sparsity factor, decay with n for reliable recovery?

Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 16 / 20
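The δ-sparse entry distribution can be sampled directly; a sketch in Python, treating Fq as the integers modulo a prime q. The values of n, q, and δ are illustrative, not from the talk:

```python
# Sketch: draw one n x n sensing matrix H^a with the delta-sparse entry
# distribution over F_q (q prime, so F_q is the integers mod q).
# n, q, delta are illustrative choices, not from the talk.
import random

def sparse_sensing_matrix(n: int, q: int, delta: float, rng: random.Random):
    """Each entry is 0 w.p. 1 - delta, otherwise uniform on {1, ..., q-1}."""
    return [[rng.choice(range(1, q)) if rng.random() < delta else 0
             for _ in range(n)] for _ in range(n)]

rng = random.Random(0)
n, q, delta = 50, 5, 0.1
H = sparse_sensing_matrix(n, q, delta, rng)
nonzeros = sum(1 for row in H for h in row if h != 0)
print(nonzeros / n**2)   # empirical density, close to delta = 0.1
```

With δn² nonzeros in expectation instead of (1 − 1/q)n², each measurement ⟨X, H^a⟩ needs correspondingly fewer multiplications and additions, which is the encoding saving claimed above.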
Sparse sensing model

The problem becomes more challenging:

X is not sensed "as much"; the measurements y^k do not contain as much information about X.

The equality

$$P(\langle Z, H^a \rangle = \langle X, H^a \rangle,\ \forall\, a = 1, \dots, k) = q^{-k}$$

no longer holds.

Nevertheless....

Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 17 / 20
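One way to see why the q^{-k} identity fails: write W = Z − X, so the event is ⟨W, H^a⟩ = 0. If W has a single nonzero entry w, then ⟨W, H^a⟩ = w·[H^a]_{ij}, which is zero with probability 1 − δ, not 1/q. A Monte Carlo sketch of this single-entry case, with illustrative q and δ:

```python
# Sketch: failure of the q^{-k} identity under sparse sensing.
# For W = Z - X with one nonzero entry, <W, H^a> = w * [H^a]_ij, so
# P(<W, H^a> = 0) = 1 - delta under the delta-sparse model, versus 1/q
# under the uniform model. q and delta are illustrative choices.
import random

def inner_prod_zero_prob(q, delta, trials, rng):
    """Empirical P(w * h = 0 mod q) for h delta-sparse and w = 1 fixed."""
    zeros = 0
    for _ in range(trials):
        h = rng.choice(range(1, q)) if rng.random() < delta else 0
        zeros += (h % q == 0)
    return zeros / trials

rng = random.Random(1)
q, delta = 5, 0.1
print(inner_prod_zero_prob(q, delta, 20000, rng))  # near 1 - delta = 0.9, far from 1/q = 0.2
```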
Achievability under sparse model

Theorem (Achievability under sparse model). Assume:

- the sensing matrices H^a are drawn according to the δ-sparse distribution,
- the min-rank decoder is used,
- r/n → γ (constant).

If the sparsity factor δ = δ_n satisfies

$$\delta \in \Omega\Bigl(\frac{\log n}{n}\Bigr) \cap o(1)$$

and the number of measurements satisfies

$$k > (2 + \epsilon)\,\gamma\Bigl(1 - \frac{\gamma}{2}\Bigr) n^2,$$

then P(En) → 0.

Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 18 / 20
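The theorem's two conditions are easy to instantiate numerically; a sketch with illustrative n, γ, and ε, and an assumed constant c = 1 on the Ω(log n / n) boundary (the constants are not specified in the talk):

```python
# Sketch: instantiate the theorem's conditions numerically.
#   measurements:  k > (2 + eps) * gamma * (1 - gamma/2) * n^2
#   sparsity:      delta_n = Omega(log n / n), here with assumed constant c = 1.
# n, gamma, eps, c are illustrative choices, not from the talk.
import math

def measurements_needed(n: int, gamma: float, eps: float = 0.05) -> int:
    """Smallest integer k exceeding the theorem's threshold."""
    return math.floor((2 + eps) * gamma * (1 - gamma / 2) * n**2) + 1

def slowest_sparsity(n: int, c: float = 1.0) -> float:
    """delta_n on the Omega(log n / n) boundary, with assumed constant c."""
    return c * math.log(n) / n

n, gamma = 200, 0.1
print(measurements_needed(n, gamma), slowest_sparsity(n))
```

Note the measurement count is still on the order of 2γ(1 − γ/2)n², i.e. sparsifying the sensing matrices (down to the log n / n regime) costs only the factor (2 + ε)/2 relative to the uniform model.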
Proof Strategy

Approximate with the uniform model, splitting the union bound by Hamming weight:

$$P(E_n) \le \sum_{\substack{Z \neq X:\ \mathrm{rank}(Z) \le \mathrm{rank}(X) \\ \|Z - X\|_0 \le \beta n^2}} P(\langle Z, H^a\rangle = \langle X, H^a\rangle,\ \forall\, a = 1,\dots,k) \;+\; \sum_{\substack{Z \neq X:\ \mathrm{rank}(Z) \le \mathrm{rank}(X) \\ \|Z - X\|_0 > \beta n^2}} P(\langle Z, H^a\rangle = \langle X, H^a\rangle,\ \forall\, a = 1,\dots,k)$$

First term: few terms; low Hamming weight, so a loose bound on the probability can be afforded.

Second term: many terms; a tight bound on the probability is needed (circular convolution of the δ-sparse pmf).

Set β = Θ(δ/log n) for the optimal "split".

Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 19 / 20
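The "circular convolution of the δ-sparse pmf" step can be made concrete: each nonzero entry of W = Z − X contributes an i.i.d. δ-sparse term to ⟨W, H^a⟩ (multiplying by a fixed nonzero field element permutes the nonzero symbols), so the pmf of the inner product is the Hamming-weight-fold circular convolution of the δ-sparse pmf, and P(⟨W, H^a⟩ = 0) decays toward 1/q as the weight grows. A sketch with illustrative q and δ:

```python
# Sketch: pmf of <W, H^a> as a circular convolution over Z_q.
# Each of the ||W||_0 nonzero positions contributes an i.i.d. delta-sparse
# term, so P(<W, H^a> = 0) tends to 1/q as the Hamming weight grows.
# q, delta, and the weights are illustrative choices.

def convolve_mod_q(p, r, q):
    """Circular convolution of two pmfs on Z_q."""
    return [sum(p[i] * r[(s - i) % q] for i in range(q)) for s in range(q)]

def prob_zero(weight, q, delta):
    """P(sum of `weight` i.i.d. delta-sparse terms = 0 in Z_q)."""
    sparse = [1 - delta] + [delta / (q - 1)] * (q - 1)
    pmf = [1.0] + [0.0] * (q - 1)          # point mass at 0
    for _ in range(weight):
        pmf = convolve_mod_q(pmf, sparse, q)
    return pmf[0]

q, delta = 5, 0.1
for w in (1, 10, 100):
    print(w, prob_zero(w, q, delta))       # decays toward 1/q = 0.2
```

This is why high-weight differences admit a tight bound (the probability is essentially q^{-1} per measurement, as in the uniform model), while low-weight differences are few enough that a loose bound suffices.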
Concluding remarks

Derived necessary and sufficient conditions for rank minimization over finite fields.

Analyzed the sparse sensing model.

Minimum rank-distance properties of rank-metric codes [analogous to Barg and Forney (2002)] were derived.

Drew analogies between the number of measurements and minimum-distance properties.

Open problem: can the sparsity level of Ω(log n / n) be reduced further?

http://arxiv.org/abs/1104.4302

Vincent Tan (UW-Madison) Rank Minimization over Finite Fields ISIT 2011 20 / 20