Representing and manipulating information#

Read chapter 2 of the textbook.

1. Everything is bits#

  • Each bit is 0 or 1

  • By encoding/interpreting sets of bits in various ways

    • Computers determine what to do (instructions)

    • … and represent and manipulate numbers, sets, strings, etc…

  • Why bits? Electronic Implementation

    • Easy to store with bistable elements.

    • Reliably transmitted on noisy and inaccurate wires.

electronic representations of bits

2. Encoding byte values#

  • Byte = 8 bits

  • Binary: 0000 0000 to 1111 1111.

  • Decimal: 0 to 255.

  • Hexadecimal: 00 to FF.

    • Base 16 number representation

    • Uses characters 0 to 9 and A to F.

  • Example: 15213 (decimal) = 0011 1011 0110 1101 (binary) = 3B6D (hex)

| Hex | Decimal | Binary |
|-----|---------|--------|
| 0   | 0       | 0000   |
| 1   | 1       | 0001   |
| 2   | 2       | 0010   |
| 3   | 3       | 0011   |
| 4   | 4       | 0100   |
| 5   | 5       | 0101   |
| 6   | 6       | 0110   |
| 7   | 7       | 0111   |
| 8   | 8       | 1000   |
| 9   | 9       | 1001   |
| A   | 10      | 1010   |
| B   | 11      | 1011   |
| C   | 12      | 1100   |
| D   | 13      | 1101   |
| E   | 14      | 1110   |
| F   | 15      | 1111   |

3. Example data representations#

| C data type | typical 32-bit | typical 64-bit | x86_64 |
|-------------|----------------|----------------|--------|
| char        | 1              | 1              | 1      |
| short       | 2              | 2              | 2      |
| int         | 4              | 4              | 4      |
| long        | 4              | 8              | 8      |
| float       | 4              | 4              | 4      |
| double      | 8              | 8              | 8      |
| pointer     | 4              | 8              | 8      |

(All sizes are in bytes.)

4. Boolean algebra#

  • Developed by George Boole in 19th century

  • Algebraic representation of logic: encode True as 1 and False as 0.

  • Operations: AND (&), OR (|), XOR (^), NOT (~).

| A | B | A&B | A\|B | A^B | ~A |
|---|---|-----|------|-----|----|
| 0 | 0 | 0   | 0    | 0   | 1  |
| 0 | 1 | 0   | 1    | 1   | 1  |
| 1 | 0 | 0   | 1    | 1   | 0  |
| 1 | 1 | 1   | 1    | 0   | 0  |

5. General Boolean algebra#

  • Operate on bit vectors

  • Operation applied bitwise.

  • All properties of boolean algebra apply.

bitwise boolean operations

6. Bit-level operations in C#

  • Boolean operations: &, |, ^, ~.

  • Shift operations:

    • Left Shift: x << y

      • Shift bit-vector x left y positions

      • Throw away extra bits on left

      • Fill with 0’s on right

    • Right Shift: x >> y

      • Shift bit-vector x right y positions

      • Throw away extra bits on right

      • Logical shift (for unsigned values)

        • Fill with 0’s on left

      • Arithmetic shift (for signed values)

        • Replicate most significant bit on left

    • Undefined Behavior

      • Shift amount < 0 or ≥ word size

  • Apply to any “integral” data type: long, int, short, char, unsigned

  • View arguments as bit vectors.

  • Arguments applied bit-wise.

7. Hands-on: bit-level operations in C#

  • Inside your csc231, create another directory called 03-data and change into this directory.

  • Create a file named bitwise_demo.c with contents along the lines of the sketch shown after this list.

  • Compile and run bitwise_demo.c.

  • Confirm that the binary printouts match the corresponding decimal printouts and the expected bitwise operations.
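
For example, bitwise_demo.c could look something like the following sketch (the example values 0x69 and 0x55 and the print_byte helper are illustrative choices, not the official listing):

```c
/* bitwise_demo.c -- print two bytes in binary and the results of the
 * bitwise and shift operators. */
#include <stdio.h>

/* Print the low 8 bits of x, most significant bit first. */
void print_byte(unsigned char x) {
    for (int i = 7; i >= 0; i--)
        putchar((x >> i) & 1 ? '1' : '0');
}

int main(void) {
    unsigned char a = 0x69;   /* 0110 1001 */
    unsigned char b = 0x55;   /* 0101 0101 */

    printf("a      = %3u = ", a); print_byte(a); printf("\n");
    printf("b      = %3u = ", b); print_byte(b); printf("\n");
    printf("a & b  = %3u = ", a & b); print_byte(a & b); printf("\n");
    printf("a | b  = %3u = ", a | b); print_byte(a | b); printf("\n");
    printf("a ^ b  = %3u = ", a ^ b); print_byte(a ^ b); printf("\n");
    printf("~a     = %3u = ", (unsigned char)~a); print_byte(~a); printf("\n");
    printf("a << 2 = %3u = ", (unsigned char)(a << 2)); print_byte(a << 2); printf("\n");
    printf("a >> 2 = %3u = ", a >> 2); print_byte(a >> 2); printf("\n");
    return 0;
}
```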

8. Encoding integers#

encoding integers

  • C does not mandate using 2’s complement.

    • But, most machines do, and we will assume so.

|             | Decimal | Hex   | Binary            |
|-------------|---------|-------|-------------------|
| short int x | 15213   | 3B 6D | 00111011 01101101 |
| short int y | -15213  | C4 93 | 11000100 10010011 |

  • Sign bit

    • For 2’s complement, most significant bit indicates sign.

    • 0 for nonnegative

    • 1 for negative

9. 2’s complement examples#

  • 2’s complement representation depends on the number of bits.

  • Technical trick: the w-bit 2’s complement representation of a negative number −n is the binary representation of 2^w − n (i.e., the absolute value of −2^w minus the absolute value of the negative number).

  • Simple example for 5-bit representation

| Value | -16 | 8 | 4 | 2 | 1 | Sum               |
|-------|-----|---|---|---|---|-------------------|
| 10    | 0   | 1 | 0 | 1 | 0 | 8 + 2 = 10        |
| -10   | 1   | 0 | 1 | 1 | 0 | -16 + 4 + 2 = -10 |

  • Simple example for 6-bit representation

| Value | -32 | 16 | 8 | 4 | 2 | 1 | Sum                    |
|-------|-----|----|---|---|---|---|------------------------|
| 10    | 0   | 0  | 1 | 0 | 1 | 0 | 8 + 2 = 10             |
| -10   | 1   | 1  | 0 | 1 | 1 | 0 | -32 + 16 + 4 + 2 = -10 |

  • Complex example

|             | Decimal | Hex   | Binary            |
|-------------|---------|-------|-------------------|
| short int x | 15213   | 3B 6D | 00111011 01101101 |
| short int y | -15213  | C4 93 | 11000100 10010011 |

| Weight | 15213 (bit) | 15213 (value) | -15213 (bit) | -15213 (value) |
|--------|-------------|---------------|--------------|----------------|
| 1      | 1 | 1     | 1 | 1      |
| 2      | 0 | 0     | 1 | 2      |
| 4      | 1 | 4     | 0 | 0      |
| 8      | 1 | 8     | 0 | 0      |
| 16     | 0 | 0     | 1 | 16     |
| 32     | 1 | 32    | 0 | 0      |
| 64     | 1 | 64    | 0 | 0      |
| 128    | 0 | 0     | 1 | 128    |
| 256    | 1 | 256   | 0 | 0      |
| 512    | 1 | 512   | 0 | 0      |
| 1024   | 0 | 0     | 1 | 1024   |
| 2048   | 1 | 2048  | 0 | 0      |
| 4096   | 1 | 4096  | 0 | 0      |
| 8192   | 1 | 8192  | 0 | 0      |
| 16384  | 0 | 0     | 1 | 16384  |
| -32768 | 0 | 0     | 1 | -32768 |
| Sum    |   | 15213 |   | -15213 |

10. Numeric ranges#

  • Unsigned values for w-bit word

    • UMin = 0

    • UMax = 2^w - 1

  • 2’s complement values for w-bit word

    • TMin = -2^(w-1)

    • TMax = 2^(w-1) - 1

    • -1: 111..1

  • Values for different word sizes:

| Word size w | 8 (1 byte) | 16 (2 bytes) | 32 (4 bytes)   | 64 (8 bytes)               |
|-------------|------------|--------------|----------------|----------------------------|
| UMax        | 255        | 65,535       | 4,294,967,295  | 18,446,744,073,709,551,615 |
| TMax        | 127        | 32,767       | 2,147,483,647  | 9,223,372,036,854,775,807  |
| TMin        | -128       | -32,768      | -2,147,483,648 | -9,223,372,036,854,775,808 |

  • Observations

    • abs(TMin) = TMax + 1

      • Asymmetric range

    • UMax = 2 * TMax + 1

  • C programming

    • #include <limits.h>

    • Declares constants: ULONG_MAX, LONG_MAX, LONG_MIN

    • Platform specific

11. Challenge#

  • Write a C program called numeric_ranges.c that prints out the value of ULONG_MAX, LONG_MAX, LONG_MIN. Also answer the following question: If we multiply LONG_MIN by -1, what do we get?

  • Note: You need to search for the correct format string specifiers.

Solution

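
One possible solution sketch (format specifiers: %lu for unsigned long, %ld for long):

```c
/* numeric_ranges.c -- one possible solution sketch */
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("ULONG_MAX = %lu\n", ULONG_MAX);
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("LONG_MIN  = %ld\n", LONG_MIN);

    /* abs(TMin) = TMax + 1, so -LONG_MIN is not representable as a long.
     * This multiplication overflows (undefined behavior in C); on a
     * typical 2's complement machine it wraps back to LONG_MIN. */
    printf("LONG_MIN * -1 = %ld\n", LONG_MIN * -1L);
    return 0;
}
```

Because the range is asymmetric (abs(TMin) = TMax + 1), multiplying LONG_MIN by -1 cannot produce the mathematically correct value; in practice the result is LONG_MIN again.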


 

12. Conversions (casting)#

  • C allows casting between different numeric data types.

  • What should be the effect/impact?

  • Notations:

    • B2T: Binary to 2’s complement

    • B2U: Binary to unsigned

    • U2B: Unsigned to binary

    • U2T: Unsigned to 2’s complement

    • T2B: 2’s complement to binary

    • T2U: 2’s complement to unsigned

13. Visualization of conversions#

2's complement to unsigned

  • T2U_w(x) = x + 2^w, if x < 0

  • T2U_w(x) = x, if x >= 0

unsigned to 2's complement

  • U2T_w(x) = x - 2^w, if x > TMax_w

  • U2T_w(x) = x, if x <= TMax_w

  • Summary

    • Bit pattern is maintained but reinterpreted

    • Can have unexpected effects: adding or subtracting 2^w

    • When expressions contain both signed and unsigned int values, the int values are cast to unsigned.

14. Hands on: casting#

  • Make sure that you are inside 03-data directory.

  • Create a file named casting.c with contents along the lines of the sketch shown after this list.

  • Compile and run casting.c.

  • Confirm that converted values are correct based on equations from slide 13.
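
For example, casting.c could look something like the following sketch (the variable names and values are illustrative, chosen to match the T2U/U2T equations from slide 13):

```c
/* casting.c -- reinterpret the same 16-bit pattern as signed and unsigned. */
#include <stdio.h>

int main(void) {
    short sx = -15213;
    unsigned short ux = (unsigned short)sx;   /* T2U: -15213 + 2^16 = 50323 */

    unsigned short uy = 50323;
    short sy = (short)uy;                     /* U2T: 50323 - 2^16 = -15213
                                                 (implementation-defined, but
                                                 this is what 2's complement
                                                 machines do) */

    printf("sx = %d -> (unsigned short)sx = %u\n", sx, ux);
    printf("uy = %u -> (short)uy = %d\n", uy, sy);

    /* Mixing signed and unsigned in one expression: the int operand is
     * converted to unsigned, so -1 compares greater than 0U. */
    printf("-1 < 0U evaluates to %d\n", -1 < 0U);
    return 0;
}
```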

15. Challenge#

  • What is wrong with the following program?

  • How can this program be corrected?

Solution
  • Change the range to 11-1

  • Why don’t we change the type of i instead?



16. Expanding and truncation#

  • Expanding (e.g., short int to int)

    • Unsigned: zeros added

    • Signed: sign extension

    • Both yield expected result

expanding

  • Truncating (e.g., unsigned to unsigned short)

    • Unsigned/signed: bits are truncated

    • Result reinterpreted

    • Unsigned: mod operation

    • Signed: similar to mod

    • For small (in magnitude) numbers yields expected behavior

truncating
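
For example, a small sketch illustrating both cases (the values are arbitrary; converting an out-of-range value to a signed type is formally implementation-defined, though 2's complement machines behave as shown):

```c
#include <stdio.h>

int main(void) {
    short s = -15213;
    int i = s;                    /* expanding: sign bit is replicated */
    unsigned short us = 50323;
    unsigned int ui = us;         /* expanding: zeros are added */

    int big = 100000;
    short t = (short)big;         /* truncating: keep low 16 bits
                                     100000 mod 2^16 = 34464, reinterpreted
                                     as signed: 34464 - 65536 = -31072 */

    printf("short -15213 -> int %d\n", i);
    printf("unsigned short 50323 -> unsigned int %u\n", ui);
    printf("int 100000 -> short %d\n", t);
    return 0;
}
```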

17. Misunderstanding integers can lead to the end of the world!#

  • Thule Site J: USAF radar station for missile warning and spacecraft tracking.

  • “There are many examples of errors arising from incorrect or incomplete specifications. One such example is a false alert in the early days of the nuclear age, when on October 5, 1960, the warning system at NORAD indicated that the United States was under massive attack by Soviet missiles with a certainty of 99.9 percent. It turned out that the Ballistic Missile Early Warning System (BMEWS) radar in Thule, Greenland, had spotted the rising moon. Nobody had thought about the moon when specifying how the system should act.” (Computer System Reliability and Nuclear War, CACM 1987).

  • Moon’s large size: 1000s of objects reported.

  • Moon’s distance: about 0.25 million miles

    • Thule’s BMEWS max distance: 3000 miles.

    • The measured distance wrapped around (was truncated modulo the capacity of the range register), yielding roughly 2,200 miles.

  • Remember assignment 1: The computer does not “see”, it only interprets.

    • Thousands of objects in the sky within missile detection range!

  • Human control:

    • Khrushchev was in New York on October 5, 1960.

    • Someone at Thule said, “why not go check outside?”

18. Unsigned addition#

  • Given w bits operands

  • True sum can have w + 1 bits (carry bit).

  • Carry bit is discarded.

  • Implementation:

  • s = (u + v) mod 2^w

19. Hands on: unsigned addition#

  • Make sure that you are inside 03-data directory.

  • Create a file named unsigned_addition.c with contents along the lines of the sketch shown after this list.

  • Compile and run unsigned_addition.c.

  • Confirm that calculated values are correct.
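
For example, unsigned_addition.c could look something like the following sketch (the operand values are arbitrary; the check on the last line computes the true sum in a wider type and reduces it mod 2^32):

```c
/* unsigned_addition.c -- unsigned addition is modular arithmetic. */
#include <stdio.h>

int main(void) {
    unsigned int u = 4000000000u;
    unsigned int v = 1000000000u;
    unsigned int s = u + v;       /* true sum 5,000,000,000 needs 33 bits */

    /* s = (u + v) mod 2^32 = 5,000,000,000 - 4,294,967,296 = 705,032,704 */
    printf("u + v            = %u\n", s);
    printf("(u + v) mod 2^32 = %llu\n",
           ((unsigned long long)u + v) % 4294967296ull);
    return 0;
}
```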

20. 2’s complement addition#

  • Same bit-level behavior as unsigned addition

    • True sum of w-bit operands may need w+1 bits, but

    • Carry bit is discarded.

    • Remaining bits are treated as a 2’s complement integer.

  • Overflow behavior is different

  • TAdd_w(u, v) = u + v + 2^w, if u + v < TMin_w (negative overflow)

  • TAdd_w(u, v) = u + v, if TMin_w <= u + v <= TMax_w

  • TAdd_w(u, v) = u + v - 2^w, if u + v > TMax_w (positive overflow)

21. Hands on: signed addition#

  • Make sure that you are inside 03-data directory.

  • Create a file named signed_addition.c with contents along the lines of the sketch shown after this list.

  • Compile and run signed_addition.c.

  • Confirm that calculated values are correct.
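
For example, signed_addition.c could look something like the following sketch. It performs the additions on unsigned values to sidestep signed-overflow undefined behavior and then reinterprets the bits; the final conversion back to int is implementation-defined but wraps as expected on 2's complement machines:

```c
/* signed_addition.c -- positive and negative overflow in 2's complement. */
#include <stdio.h>
#include <limits.h>

int main(void) {
    int u = INT_MAX;          /*  2147483647 */
    int v = 1;
    int pos_ovf = (int)((unsigned)u + (unsigned)v);   /* wraps to TMin */

    int x = INT_MIN;          /* -2147483648 */
    int y = -1;
    int neg_ovf = (int)((unsigned)x + (unsigned)y);   /* wraps to TMax */

    printf("INT_MAX + 1  -> %d (positive overflow)\n", pos_ovf);
    printf("INT_MIN + -1 -> %d (negative overflow)\n", neg_ovf);
    return 0;
}
```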

22. Multiplication#

  • Compute product of w-bit numbers x and y.

  • Exact results can be bigger than w bits.

    • Unsigned: up to 2w bits: 0 <= x * y <= (2^w - 1)^2

    • 2’s complement (negative): up to 2w - 1 bits: x * y >= -2^(2w-2) + 2^(w-1)

    • 2’s complement (positive): up to 2w bits: x * y <= 2^(2w-2)

  • To maintain exact results, the product would have to be kept in a wider word; in software this is done by arbitrary-precision arithmetic packages.

  • Trust your compiler: Modern CPUs and OSes will most likely know to select the optimal method to multiply.

23. Multiplication and Division by power-of-2#

  • Power-of-2 multiply with shift

    • u << k gives u * 2^k

    • True product has w + k bits: discard the extra k bits.

  • Unsigned power-of-2 divide with shift

    • u >> k gives floor(u / 2^k)

    • Uses a logical shift.

  • Signed power-of-2 divide with shift

    • x >= 0: x >> k gives floor(x / 2^k)

    • x < 0: (x + (1 << k) - 1) >> k gives ceiling(x / 2^k), matching C’s round-toward-zero division

    • C statement: (x < 0 ? x + (1 << k) - 1 : x) >> k
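
For example (arbitrary values; right-shifting a negative signed value is implementation-defined in C but is an arithmetic shift on typical machines):

```c
#include <stdio.h>

int main(void) {
    unsigned u = 15213u;
    int k = 3;

    printf("u << %d = %u (u * 8 = %u)\n", k, u << k, u * 8);
    printf("u >> %d = %u (u / 8 = %u)\n", k, u >> k, u / 8);

    int x = -15213;
    /* Arithmetic right shift alone rounds toward -infinity ... */
    printf("x >> %d              = %d\n", k, x >> k);
    /* ... adding the bias (1 << k) - 1 first rounds toward zero,
     * which matches C's division operator. */
    printf("(x + (1<<k)-1) >> %d = %d (x / 8 = %d)\n",
           k, (x + (1 << k) - 1) >> k, x / 8);
    return 0;
}
```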

24. Negation: complement and increase#

  • Negate through complement and increment:

    • ~x + 1 == -x

25. Challenge#

  • Implement a C program called negation.c that implements and validates the equation in slide 24. The program should take in a command line argument that takes in a number of type short to be negated.

  • What happens if you try to negate -32768?

Solution
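
One possible solution sketch (the argument handling with atoi is a simple illustrative choice):

```c
/* negation.c -- one possible solution sketch */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <short value>\n", argv[0]);
        return 1;
    }
    short x = (short)atoi(argv[1]);
    short neg = (short)(~x + 1);              /* complement and increment */

    printf("x = %d, ~x + 1 = %d, -x = %d\n", x, neg, (short)-x);
    printf("~x + 1 == -x ? %s\n", neg == (short)-x ? "yes" : "no");
    return 0;
}
```

For -32768 (TMin for a 16-bit short), ~x + 1 yields -32768 again: +32768 is not representable in a short, another consequence of the asymmetric range (abs(TMin) = TMax + 1).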

26. Byte-oriented memory organization#

  • Programs refer to data by address

    • Conceptually, envision it as a very large array of bytes

      • In reality, it’s not, but can think of it that way

    • An address is like an index into that array

      • and, a pointer variable stores an address

  • Note: system provides private address spaces to each “process”

    • Think of a process as a program being executed

    • So, a program can clobber its own data, but not that of others

27. Machine words#

  • Any given computer has a “Word Size”

  • “Word size”: nominal size of integer-valued data and of addresses

  • Until recently, most machines used 32 bits (4 bytes) as word size

    • Limits addresses to 4 GB (2^32 bytes)

  • Increasingly, machines have 64-bit word size

    • Potentially, could have 18 EB (exabytes) of addressable memory

    • That’s 18.4 × 10^18 bytes

  • Machines still support multiple data formats

    • Fractions or multiples of word size

    • Always integral number of bytes

28. Word-oriented memory organization#

  • Addresses specify byte locations

    • Address of first byte in word.

    • Address of successive words differ by 4 (32-bit) or 8 (64-bit).

word-oriented memory organization

30. Byte ordering in memory#

  • Machine-dependent

  • Big Endian: Sun (Oracle SPARC), PPC Mac, Internet (network data transfer)

    • Least significant byte has the highest address.

  • Little Endian: x86, ARM processors running Android, iOS, and Linux

    • Least significant byte has lowest address.

  • Example

    • Variable x has 4-byte value of 0x01234567

    • Address given by &x is 0x100

byte ordering example

31. Hands on: byte ordering in memory#

  • Make sure that you are inside 03-data directory.

  • Create a file named byte_ordering.c with contents along the lines of the sketch shown after this list.

  • Compile and run byte_ordering.c.

  • Confirm that calculated values are correct.
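
For example, byte_ordering.c could look something like the following sketch (the show_bytes helper name is an illustrative choice); it prints the bytes of an int starting from the lowest address:

```c
/* byte_ordering.c -- print the bytes of an int, lowest address first. */
#include <stdio.h>

void show_bytes(unsigned char *start, size_t len) {
    for (size_t i = 0; i < len; i++)
        printf(" %.2x", start[i]);      /* byte stored at address start+i */
    printf("\n");
}

int main(void) {
    int x = 0x01234567;
    printf("x = 0x%08x, bytes in memory:", x);
    show_bytes((unsigned char *)&x, sizeof(x));
    /* Little endian (e.g., x86-64): 67 45 23 01
       Big endian:                   01 23 45 67 */
    return 0;
}
```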

32. Fractional binary numbers#

  • What is 1011.101₂?

    • 8 + 0 + 2 + 1 + 1/2 + 0 + 1/4 = 11.75

  • Can only exactly represent numbers of the form x/2^k

  • Limited range of numbers within the w-bit word size.

33. IEEE Floating point#

  • IEEE Standard 754

    • Established in 1985 as a uniform standard for floating point arithmetic

    • Supported by all major CPUs

    • Some CPUs don’t implement IEEE 754 in full, for example, early GPUs, Cell BE processor

  • Driven by numerical concerns

    • Nice standards for rounding, overflow, underflow

    • Hard to make fast in hardware (Numerical analysts predominated over hardware designers in defining standard).

34. Importance of floating-point arithmetic accuracy#

35. IEEE Floating point representation#

  • Numerical form: (-1)^s * M * 2^E

    • Sign bit s determines whether the number is negative or positive.

    • Significand M is a fractional value, normally in the range [1.0, 2.0).

    • Exponent E weights the value by a power of two.

  • Encoding

    • Most significant bit is the sign bit s.

    • exp field encodes E (but is not equal to E)

    • frac field encodes M (but is not equal to M)

floating encoding

  • Single precision: 32 bits

32-bit encoding

  • Double precision: 64 bits

64-bit encoding

36. Three “kinds” of floating point numbers#

  • Depends on the exp field (E).

  • Denormalized: exp contains all 0s.

  • Special: exp contains all 1s.

  • Normalized: exp contains a mix of 0s and 1s.

37. Normalized floating point numbers#

  • When: exp != 000…0 and exp != 111…1

  • Exponent coded as a biased value: E = exp – Bias

    • exp: unsigned value of exp field

    • Bias = 2^(k-1) - 1, where k is the number of exponent bits

      • Single precision: 127 (exp: 1…254, E: -126…127)

      • Double precision: 1023 (exp: 1…2046, E: -1022…1023)

  • Significand coded with implied leading 1: M = 1.xxx…x₂

    • xxx…x: bits of frac field

    • Minimum when frac=000…0 (M = 1.0)

    • Maximum when frac=111…1 (M = 2.0 – ε)

    • Get extra leading bit for “free” (hence the range: [1.0, 2.0))

38. Example: normalized floating point numbers#

  • Value: float F = 15213.0;

    • 15213₁₀ = 11101101101101₂ = 1.1101101101101₂ * 2^13

  • Significand:

    • M = 1.1101101101101₂

    • frac = 11011011011010000000000₂

  • Exponent:

    • E = 13

    • Bias = 127

    • exp = 140 = 10001100₂

  • Result: 0|10001100|11011011011010000000000

39. Hands on: check floating point representation#

  • Make sure that you are inside 03-data directory.

  • Create a file named show_fp.c with contents along the lines of the sketch shown after this list.

  • Compile and run show_fp.c.

  • Confirm that calculated values in slide 38 are correct.
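
For example, show_fp.c could look something like the following sketch; it uses memcpy to reinterpret the bits of float F = 15213.0 and then splits out the sign, exp, and frac fields:

```c
/* show_fp.c -- print the bit-level encoding of a float. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 15213.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));   /* reinterpret the 32 bits safely */

    uint32_t sign = bits >> 31;
    uint32_t exp  = (bits >> 23) & 0xFF;
    uint32_t frac = bits & 0x7FFFFF;

    printf("float %.1f = 0x%08x\n", f, (unsigned)bits);
    printf("sign = %u\n", (unsigned)sign);
    printf("exp  = %u (E = exp - 127 = %d)\n", (unsigned)exp, (int)exp - 127);
    printf("frac = 0x%06x\n", (unsigned)frac);
    return 0;
}
```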

40. Denormalized floating point#

  • Condition: exp = 000…0

  • Exponent value: E = 1 – Bias

  • Significand coded with implied leading 0: M = 0.xxx…x₂

    • xxx…x: bits of frac

  • Cases

    • exp = 000…0, frac = 000…0

      • Represents zero value

      • Note distinct values: +0 and –0

    • exp = 000…0, frac ≠ 000…0

      • Numbers closest to 0.0

      • Equispaced

41. Special cases#

  • Condition: exp = 111…1

  • Case: exp = 111…1, frac = 000…0

    • Represents value infinity

    • Result of an operation that overflows

    • Both positive and negative

  • Case: exp = 111…1, frac != 000…0

    • Not-a-Number (NaN)

    • Represents case when no numeric value can be determined

42. Floating operations: basic idea#

  • Compute exact result.

  • Make it fit into desired precision.

    • Possible overflow if exponent too large

    • Possible round to fit into frac

  • Rounding modes

| Rounding mode          | 1.40 | 1.60 | 1.50 | 2.50 | -1.50 |
|------------------------|------|------|------|------|-------|
| Towards zero           | 1    | 1    | 1    | 2    | -1    |
| Round down             | 1    | 1    | 1    | 2    | -2    |
| Round up               | 2    | 2    | 2    | 3    | -1    |
| Nearest even (default) | 1    | 2    | 2    | 2    | -2    |

  • Nearest even

    • Hard to get any other mode without dropping into assembly.

    • C99 has support for rounding mode management

  • All others are statistically biased

    • Sum of set of positive numbers will consistently be over- or under-estimated.
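
A small sketch of the C99 rounding-mode management mentioned above, using <fenv.h> (the file name rounding_demo.c is hypothetical; rint rounds according to the current mode, volatile keeps the compiler from folding the calls at compile time, and linking with -lm may be needed):

```c
/* rounding_demo.c -- observe different rounding modes (C99 <fenv.h>) */
#include <stdio.h>
#include <fenv.h>
#include <math.h>

int main(void) {
    volatile double a = 2.5, b = -1.5;   /* volatile: force runtime rounding */

    fesetround(FE_TONEAREST);            /* default: round to nearest even */
    printf("nearest even: rint(2.5) = %.1f, rint(-1.5) = %.1f\n", rint(a), rint(b));

    fesetround(FE_UPWARD);               /* round toward +infinity */
    printf("round up:     rint(2.5) = %.1f, rint(-1.5) = %.1f\n", rint(a), rint(b));

    fesetround(FE_DOWNWARD);             /* round toward -infinity */
    printf("round down:   rint(2.5) = %.1f, rint(-1.5) = %.1f\n", rint(a), rint(b));

    fesetround(FE_TOWARDZERO);           /* truncate */
    printf("toward zero:  rint(2.5) = %.1f, rint(-1.5) = %.1f\n", rint(a), rint(b));
    return 0;
}
```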

43. Floating point multiplication#

  • (-1)^s1 M1 2^E1 * (-1)^s2 M2 2^E2

  • Exact result: (-1)^s M 2^E

    • s = s1^s2

    • M = M1*M2

    • E = E1+E2

  • Correction

    • If M >= 2, shift M right, increment E.

    • If E out of range, overflow.

    • Round M to fit frac precision

  • Implementation: Biggest chore is multiplying significands.

44. Floating point addition#

  • (-1)^s1 M1 2^E1 + (-1)^s2 M2 2^E2 (assume E1 >= E2)

  • Exact result: (-1)^s M 2^E

    • Sign s, significand M: result of signed align and add

    • E = E1

  • Correction

    • If M >= 2, shift M right, increment E.

    • If M < 1, shift M left k positions, decrement E by k.

    • Overflow if E out of range

    • Round M to fit frac precision

  • Implementation: biggest chore is aligning and adding the significands.

45. Mathematical properties of floating point addition#

  • Compare to those of Abelian group (a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written):

    • Closed under addition? Yes (but may generate infinity or NaN)

    • Commutative? Yes

    • Associative? No

      • Overflow and inexactness of rounding

      • (3.14+1e10)-1e10 = 0, 3.14+(1e10-1e10) = 3.14

    • 0 is additive identity? Yes

    • Every element has additive inverse? Almost

      • Except for infinities and NaN

  • Monotonicity?

    • Almost

    • Except for infinities and NaN

46. Mathematical properties of floating point multiplication#

  • Compare to those of Abelian group (a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written):

    • Closed under multiplication? Yes (but may generate infinity or NaN)

    • Commutative? Yes

    • Associative? No

      • Overflow and inexactness of rounding

      • (1e20 * 1e20) * 1e-20= inf, 1e20 * (1e20 * 1e-20)= 1e20

    • 1 is multiplicative identity? Yes

    • Multiplication distributes over addition? No

      • Overflow and inexactness of rounding

      • 1e20 * (1e20-1e20)= 0.0, 1e20 * 1e20 – 1e20 * 1e20 = NaN

  • Monotonicity?

    • Almost

    • Except for infinities and NaN

47. Floating point in C#

  • C guarantees two levels

    • float: single precision

    • double: double precision

  • Conversion/casting

    • Casting between int, float, and double changes bit representation

    • double/float to int

      • Truncates fractional part

      • Like rounding toward zero

      • Not defined when out of range or NaN: Generally sets to TMin

    • int to double

      • Exact conversion, as long as the int value fits in double’s 53-bit significand

    • int to float

      • Will round according to rounding mode
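
For example (arbitrary values; the rounded results assume the default round-to-nearest-even mode):

```c
#include <stdio.h>

int main(void) {
    double d = -3.99;
    printf("(int)-3.99 = %d\n", (int)d);            /* truncates toward zero: -3 */

    int big = 123456789;                            /* needs 27 significant bits */
    float f = (float)big;                           /* float: 24-bit significand */
    printf("int %d -> float %.1f (rounded)\n", big, f);

    long long huge = 123456789012345678LL;          /* needs ~57 significant bits */
    double dd = (double)huge;                       /* double: 53-bit significand */
    printf("long long %lld -> double %.1f (rounded)\n", huge, dd);
    return 0;
}
```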