Posts

How does a flash drive/pen drive store data?

Computers cannot understand analog signals. Analog data refers to physical data such as voice or radio signals; we humans understand analog data, but computers only understand digital data. Digital data has only two states, 0 or 1, corresponding to either the ON state (1) or the OFF state (0). Computers can't be switched off entirely, so the OFF state is represented by a low voltage (0) and the ON state by a high voltage (1). Humans have devised various techniques to encode data in digital form; one of them is ASCII (American Standard Code for Information Interchange). In ASCII, capital "A" is stored as seven binary digits, 1000001, while lowercase "a" is stored as 1100001; the question mark "?" is stored as 0111111, the number "7" as 0110111, and the left bracket "[" as 1011011. ASCII is a code agreed upon by computers around the globe, so if a computer sends 1000001, any receiving computer knows it stands for "A".
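As a quick check, most languages expose these code points directly; here is a minimal Python sketch (using the built-in ord and format functions) that reproduces the codes listed above:

```python
# Show how text characters map to the 7-bit ASCII codes described above.
for ch in ["A", "a", "?", "7", "["]:
    code = ord(ch)               # numeric ASCII code point
    bits = format(code, "07b")   # the same value as seven binary digits
    print(f"{ch!r} -> decimal {code:3d} -> binary {bits}")
```

Running this prints 1000001 for "A", 1100001 for "a", 0111111 for "?", 0110111 for "7", and 1011011 for "[".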

Flash Memory and Flash Drive

Flash memory is an electronic (solid-state) non-volatile computer storage medium that can be electrically erased and reprogrammed. Toshiba developed flash memory from EEPROM (electrically erasable programmable read-only memory) in the early 1980s and introduced it to the market in 1984. The two main types of flash memory are named after the NAND and NOR logic gates; the individual flash memory cells exhibit internal characteristics similar to those of the corresponding gates. Where EPROMs had to be completely erased before being rewritten, NAND-type flash memory may be written and read in blocks (or pages) which are generally much smaller than the entire device. NOR-type flash allows a single machine word (byte) to be written – to an erased location – or read independently. The NAND type operates primarily in memory cards, USB flash drives, solid-state drives (those produced in 2009 or later), and similar products, for general storage and transfer of data.
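As a rough illustration of the block-erase behaviour described above, here is a toy Python model (my own simplification, not real NAND internals): programming a page can only clear bits from 1 to 0, so restoring any bit to 1 requires erasing the whole block at once.

```python
class ToyNandBlock:
    """Toy model of one NAND flash block: pages are programmed
    individually, but erasure (resetting bits to 1) is block-wide."""

    PAGES = 4
    PAGE_BITS = 8

    def __init__(self):
        # An erased flash cell reads as 1, so a fresh block is all 1s.
        self.pages = [(1 << self.PAGE_BITS) - 1] * self.PAGES

    def program(self, page, value):
        # Programming can only flip bits from 1 to 0, never 0 back to 1.
        if value & ~self.pages[page]:
            raise ValueError("bit already 0: erase the whole block first")
        self.pages[page] &= value

    def erase(self):
        # Erase is block-granular: every page returns to all 1s at once.
        self.pages = [(1 << self.PAGE_BITS) - 1] * self.PAGES

block = ToyNandBlock()
block.program(0, 0b10110011)       # fine: clears some bits in page 0
try:
    block.program(0, 0b11111111)   # fails: would need 0 -> 1 transitions
except ValueError as e:
    print(e)
block.erase()                      # only way back: erase all pages
block.program(0, 0b11111111)       # now succeeds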

Discrete Structures Important in Computer Science

As a computer scientist looking to get a Master's degree with a focus on "Algorithms, Complexity and Computability Theory and Programming Languages," I would say discrete mathematics is very important. Discrete math will help you with the "Algorithms, Complexity and Computability Theory" part of the focus more than with programming languages. An understanding of set theory, probability, and combinatorics will allow you to analyze algorithms: you will be able to identify the parameters and limitations of your algorithms and recognize how complex a problem or solution is. As for programming languages, discrete math doesn't touch on how to actually program, but it can be used for software system design specification. I used the Z notation ("Zed") at university, which deals with designing a system using set theory. I'm not sure what percentage of software systems are designed with set theory these days, though. The last imp
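As a small illustration of how such counting arguments feed into algorithm analysis, here is a minimal Python sketch: the binomial identity that the C(n, k) values sum to 2^n tells you, before running anything, that brute-force subset search does exponential work.

```python
from itertools import combinations
from math import comb

# Counting arguments from discrete math predict algorithm cost up front:
# a set of n items has 2**n subsets and comb(n, k) subsets of size k,
# so any brute-force search over all subsets grows exponentially in n.
n = 10
items = range(n)

total_subsets = sum(comb(n, k) for k in range(n + 1))
assert total_subsets == 2 ** n   # binomial theorem: sum of C(n,k) = 2^n

# Enumerating them confirms the count (and the exponential blow-up).
enumerated = sum(1 for k in range(n + 1) for _ in combinations(items, k))
print(enumerated, 2 ** n)        # 1024 1024
```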

The Difference Between Bandwidth and Speed

A link in a network is characterized by two factors: bandwidth and speed. These are usually the same, but not always. Definition: speed is the bit rate of the circuit, while bandwidth is the amount of that "speed" available for use. As an example, a 500 Megabit Ethernet MPLS service delivered over a 1 Gigabit Ethernet connection to the site would have a bandwidth of 500 Mbps and a speed of 1 Gbps. Speed is commonly determined by the physical signalling of the underlying network. The most common example is link aggregation, where a number of Ethernet connections are bonded into a single interface: the bandwidth is the sum of the individual connections, but the speed is determined by the physical network connection. Another common example occurs when provisioning WAN circuits. It is common to use a high-speed circuit to connect from the customer site to the carrier network but offer a "sub-rate" speed for actual use. For example, a network using 10 Gbps everywhere, including to your sites, might only offer a fraction of that as usable bandwidth.
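To make the distinction concrete, here is a small Python sketch of the 500 Mbps-on-1 Gbps example above (the frame and file sizes are illustrative): individual bits serialize at the line speed, but sustained throughput is capped at the provisioned bandwidth.

```python
# A 500 Mbps service delivered over a 1 Gbps Ethernet port. Individual
# bits serialize at the line SPEED, but a sustained transfer is limited
# to the provisioned BANDWIDTH.
line_speed_bps = 1_000_000_000      # physical signalling rate (1 Gbps)
bandwidth_bps  = 500_000_000        # provisioned sub-rate (500 Mbps)

frame_bits = 1500 * 8               # one full-size Ethernet frame
print(f"frame serialization time: {frame_bits / line_speed_bps * 1e6:.1f} us")

file_bits = 10 * 8 * 1_000_000_000  # a 10 GB transfer
print(f"sustained transfer time:  {file_bits / bandwidth_bps:.0f} s")
```

Each 1500-byte frame still goes onto the wire in 12 microseconds (the 1 Gbps speed), but moving 10 GB takes 160 seconds because the service polices you to 500 Mbps.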

Difference between kilobyte, megabyte and gigabyte

The difference between a kilobyte (KB), a megabyte (MB), a gigabyte (GB) and a terabyte (TB) is size and nothing more. World bytes table:
– 1 Bit = Binary Digit;
– 8 Bits = 1 Byte;
– 1000 Bytes = 1 Kilobyte;
– 1000 Kilobytes = 1 Megabyte;
– 1000 Megabytes = 1 Gigabyte;
– 1000 Gigabytes = 1 Terabyte;
– 1000 Terabytes = 1 Petabyte;
– 1000 Petabytes = 1 Exabyte;
– 1000 Exabytes = 1 Zettabyte;
– 1000 Zettabytes = 1 Yottabyte;
– 1000 Yottabytes = 1 Brontobyte.
In binary terms, 1024 MB equals 1 GB and 1024 KB equals 1 MB, which means 1024 × 1024 KB equals 1 GB. Likewise, 1 TB has a capacity of 1024 GB. Once upon a time, you could tell whether 1024 or 1000 was meant based on the case of the letters: generally, anything in print usually refers to 1000, and anything on the computer refers to 1024.
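A minimal Python sketch of the two conventions (the helper names are my own) shows how the same byte count reads differently under 1000-based and 1024-based units:

```python
# The two conventions from the table above: decimal (1000-based) units
# as used in print and by drive makers, and binary (1024-based) units
# as used by most operating systems when reporting sizes.
def to_unit(num_bytes, base, units):
    value = float(num_bytes)
    for unit in units:
        if value < base:
            return f"{value:.2f} {unit}"
        value /= base
    return f"{value:.2f} {units[-1]}"

def decimal_size(n):   # 1 KB = 1000 bytes
    return to_unit(n, 1000, ["B", "KB", "MB", "GB", "TB"])

def binary_size(n):    # 1 KiB = 1024 bytes
    return to_unit(n, 1024, ["B", "KiB", "MiB", "GiB", "TiB"])

n = 1_000_000_000      # one "decimal" gigabyte
print(decimal_size(n)) # 1.00 GB
print(binary_size(n))  # 953.67 MiB -- the same bytes, 1024-based
```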

Is 1 GB equal to 1024 MB or 1000 MB?

A byte is equal to 8 bits. A kilobyte is 1,000 or 1,024 bytes, depending on which definition is used. A megabyte is approximately 1,000 kilobytes; as a unit of information or computer storage it can also mean exactly 1,048,576 bytes. A gigabyte is approximately 1,000 megabytes; as a unit of information or computer storage it can also mean approximately 1.07 billion bytes. So 1 gigabyte = 1024 megabytes is still correct under the other accepted standard (see the world bytes table in the previous post for the full decimal ladder from bit to brontobyte). Traditionally, one gigabyte has been defined as 1024^3 bytes, that is, 2^30 or 1,073,741,824 bytes. This is the definition commonly used for computer memory and file sizes.
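A short worked example in Python (the "500 GB" drive is illustrative) shows why the two definitions matter in practice:

```python
# Worked example of the two gigabyte definitions discussed above.
decimal_gb = 1000 ** 3             # marketing / SI gigabyte
binary_gb  = 1024 ** 3             # traditional 2**30-byte gigabyte

print(binary_gb)                   # 1073741824, i.e. 1,073,741,824 bytes

# A drive sold as "500 GB" (decimal) reported in binary gigabytes:
drive_bytes = 500 * decimal_gb
print(drive_bytes / binary_gb)     # ~465.66 -- the familiar "missing" space
```

This gap between 1000^3 and 1024^3 is why an operating system reports a drive advertised as 500 GB as roughly 465 GB.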