Description

Some Results on Codes for Flash Memory. Michael Mitzenmacher. Includes work with Hilary Finucane, Zhenming Liu, Flavio Chierichetti. Flash memory is now becoming the standard for many products and devices; even flash hard drives are becoming standard.

Transcripts

Some Results on Codes for Flash Memory. Michael Mitzenmacher. Includes work with Hilary Finucane, Zhenming Liu, Flavio Chierichetti

Flash Memory Now becoming the standard for many products and devices. Even flash hard drives are becoming standard. But flash memory works differently from traditional memories. New, interesting questions….

Basics of Flash Data is written into cells. Can "write" at the cell level. Cells contain electrons: can ADD electrons at the cell level. Typical ranges are 2-4 possible states, but this may increase: 256 someday? Cells are organized into blocks. Can only ERASE at the block level. Blocks can be thousands of cells or more.

The Problem with Erasures Erasing a block is costly in terms of time; handled by preemptive moves of data. And in terms of wear: limited life cycles make minimizing block erasures a key goal.

Basics of Flash Reading and "one-way" writing (adding electrons) is easy. Writing general values is hard. What should our data representation look like in such a setting? [Slide figure: example cells with charge levels 0-3.]

Big Underlying Question How should flash change our basic algorithms, data structures, data representation? Memory structure and hierarchy have a huge impact on performance. Algorithmists should care! Here we focus on the basic question of data representation.

Some History Write-once memories (WOMs). Introduced by Rivest and Shamir, early 1980's. Punch cards, optical disks. Can turn 0's into 1's, but not back again. Question: how many punch-card bits do you need to represent t rewrites of a k-bit value? Starting point for this kind of analysis. Better schemes exist than the naive kt bits.
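As a concrete illustration of beating the naive kt bits, the classic Rivest-Shamir construction stores two writes of a 2-bit value in only 3 write-once cells (the naive scheme needs 4). A minimal Python sketch, following the standard presentation of that code (first-generation codewords have weight at most 1, second-generation codewords are their complements):

```python
# Rivest-Shamir WOM code: 2 writes of 2 bits in 3 write-once cells.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
# Second-generation codewords are the bitwise complements.
SECOND = {v: tuple(1 - b for b in w) for v, w in FIRST.items()}

def decode(cells):
    """Weight <= 1 means first generation; weight >= 2 means second."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, w in table.items() if w == cells)

def write(cells, value):
    """Return new cell contents encoding `value`; 0's may only become 1's.
    Supports at most two generations of writes."""
    if decode(cells) == value:
        return cells  # already encodes the value: write nothing
    target = FIRST[value] if sum(cells) == 0 else SECOND[value]
    assert all(c <= t for c, t in zip(cells, target)), "write-once violated"
    return target
```

The key point is that any second value can be reached from any first-generation codeword by only raising 0's to 1's, because the single zero in a second-generation codeword sits exactly where the *same* value's first-generation 1 would be.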

Floating Codes Data representation for flash memory. State is an n-ary sequence of q-ary numbers: represents a block of n cells; each cell holds an electric charge, q states. State is mapped to variable values, giving a k-ary sequence of l-ary numbers. State changes by increasing one or more cell values, or by resetting the entire block. Resets are expensive!!!!

Floating Codes: The Problem As variable values change, we need the state to track the variables. How do we choose the mapping function from states to variables AND the transition function from variable changes to state changes to maximize the time between reset operations? These codes don't correct errors; they are just data representation. Errors are a separate issue.

Formal Model; General Codes We usually consider limited change: one variable changes per step.

Example Track k = 4 bits (so l = 2) with n = 8 cells having q = 4 states. [Slide figure: a sequence of cell-state diagrams showing the decode map D and update rule R as bits 3, 2, 1, and 1 change in turn.]

History Floating codes introduced by Jiang, Bohossian, Bruck (ISIT 2007) as a model for flash memory. Designed to maximize worst-case time between resets. New multidimensional flash codes suggested by Yaakobi, Vardy, Siegel, Wolf in Allerton 2008. Average case studied by Finucane, Liu, Mitzenmacher in Allerton 2008.

Contribution 1: New Worst-Case Codes Hilary Finucane's senior thesis. Similar codes also found simultaneously by Yaakobi et al. Simple construction, best known performance. Tracks k bits of data, for even k. Performance measured by deficiency. The maximum possible number of updates is n(q-1); the deficiency is the smallest t such that n(q-1)-t updates are always possible.

Mod-Based Codes Break the block into groups of k cells. Each group will represent 1 bit, and there is at most one active group per bit. The parity of the group determines the value of the bit. Increment a cell by 1 each time the bit changes. How do we know which bit each group stores? Start with the j-th cell within a group to represent bit j. As cells fill, go right, wrapping back to the first cell at the end. Either the last empty cell is j-1, or the only non-full cell is j-1; either way, we can figure out which bit. Maximum deficiency: k^2 q. Independent of n!
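The group mechanics above can be sketched in code. This is an illustrative Python model of a single group, not the authors' implementation; it assumes the tracked bit starts at 0, so the bit's value is the parity of the group's total charge:

```python
Q = 4  # charge states per cell: 0 .. Q-1

def flip(cells, j):
    """Record one change of bit j: add one unit of charge to the first
    non-full cell, scanning cyclically from position j."""
    k = len(cells)
    for t in range(k):
        i = (j + t) % k
        if cells[i] < Q - 1:
            cells[i] += 1
            return True
    return False  # group exhausted; the block would need a reset

def decode_group(cells):
    """Recover (j, bit) from a group snapshot, or None for an empty or
    completely full group (both are ignored when reading the block)."""
    k = len(cells)
    if all(c == 0 for c in cells) or all(c == Q - 1 for c in cells):
        return None
    empties = [i for i, c in enumerate(cells) if c == 0]
    if empties:
        # Empty cells form one cyclic run ending at j-1: j-1 is the
        # empty cell whose cyclic successor already holds charge.
        jm1 = next(i for i in empties if cells[(i + 1) % k] != 0)
    else:
        # Every cell has been touched: the unique non-full cell is j-1.
        jm1 = next(i for i, c in enumerate(cells) if c != Q - 1)
    return (jm1 + 1) % k, sum(cells) % 2
```

With q = 4 and k = 4, such a group absorbs k(q-1) = 12 bit changes before it is full and the block must eventually be reset.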

Examples Track k = 8 bits with cells having q = 4 states. [Slide figure: example groups of cells and the bits they decode to, e.g. "bit 5 is 1", "bit 5 is 0", "bit 1 is 0", "bit 4 is 0"; empty groups and full groups are ignored.]

Further Improvements Can improve the basic construction by being more careful as the available cells get scarce. Can show O(kq (log_2 k)(log_q k)) deficiency. Use smaller blocks of cells, but explicitly write which bit each stores once the number of cells gets small.

Contribution 2: Average Case Argument: worst-case time between resets is the wrong design criterion. Many resets in a lifetime. Mass-produced product. Potential to model user behavior. Statistical performance guarantees are more appropriate: expected time between resets, time with high probability. Given a model.

Specific Contributions Problem definition/model. Codes for simple cases.

Formal Model: Average Case As above: cost is 0 when R moves to a cell state above the previous one, 1 otherwise (a reset). Assumption: variable changes are given by a Markov chain. Example: the i-th bit changes with probability p_i. Given D, R, this yields a Markov chain on cell states. Consider the equilibrium distribution on cell states. The goal is to minimize the average cost, equivalently, to maximize the average time between resets.
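To make the cost model concrete, here is a small simulation under the assumed model that one bit changes per step, bit i with probability p_i. It uses the naive baseline code (one cell per bit; bit i is cells[i] mod 2), which is my illustration rather than anything from the talk; a reset is forced the first time a full cell must be incremented:

```python
import random

def steps_until_reset(k, q, p, rng):
    """One simulated run of the naive one-cell-per-bit code.
    Bit i (chosen with probability p[i]) flips each step; its cell
    gains one unit of charge.  Returns the number of updates absorbed
    before a full cell is asked to flip, forcing a block reset."""
    cells = [0] * k
    steps = 0
    while True:
        i = rng.choices(range(k), weights=p)[0]  # which bit changes
        if cells[i] == q - 1:   # cell full: block must be erased
            return steps
        cells[i] += 1
        steps += 1
```

For k = 2, q = 8 and uniform p this is exactly Banach's matchbox problem: the naive code absorbs only about 11.9 of the n(q-1) = 14 possible updates in expectation, which is the kind of gap better codes aim to close.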

Variations Many possible variations: multiple variables change per step; more general random processes for the values; rules limiting transitions; general costs, optimizations. Hardness results? Conjecture some variations are NP-hard or worse.

Building Block Code: n = 2, k = 2, l = 2 2 bit values, 2 cells. Code based on a striped Gray code. Expected time / time with high probability before reset = 2q - o(q). Asymptotically optimal for all p, 0 < p < 1. Worst-case optimal: approx. 3q/2. [Slide figure: the code table, with entries such as D(0,0) = 00, D(1,3) = 11, R((1,0),2,1) = (2,0).]

Proof Sketch "Even cells": down with probability p, right with probability 1-p. "Odd cells": right with probability p, down with probability 1-p. The code hugs the diagonal; right and down moves approximately balance for the first 2q - o(q) steps.

A Slightly Better Code Changing the last corner improves things.

Performance Results

Codes for k = l = 2 Break into Gray code squares for larger n. Each block walks along the diagonal of its own Gray code square. At the last square, it behaves like the n = 2, k = 2, l = 2 code. Expected deficiency O(sqrt(q)).

Example Bit-1 changes are recorded from the left…. Bit-2 changes are recorded from the right…. They meet somewhere in the middle, depending on the rates….

Random Codes Average-case analysis looks at random data. Natural also to look at random codes (Shannon-style arguments). We consider random codes in the setting of general transitions: all k bits can change simultaneously. Gives some insight into what might be possible. Results in the paper.

Conclusions New questions arising from flash memory: how to store data to maximize lifetimes; how to code to deal with errors; how to optimize algorithms and data structures; how to improve memory hierarchies and variable-type memory systems. Big question: is this a game-changer for core computer science? How much should we be rethinking?