[Ubuntu] Ignore Line Ending Differences in the Meld Tool

March 9, 2011

Recently I upgraded my Ubuntu VM to 10.04.2 amd64. Meld version 1.3.0 shows line-ending differences in the file comparison view, while version 1.1.5.1 (in my previous 8.04 32-bit Ubuntu VM) did not show such differences.

Usually I need to compare files with the same name but different line endings (one is checked out from version control on Linux and the other on Windows). I’d like CRLF and LF to compare as equal.

Solution:

Edit -> Preferences -> Text Filters -> create a new pattern with a name such as “line ending” and the regex “\r+$”, then check the Active check box. Refresh the comparison or restart Meld.

[Intel 64 and IA32] Reading Notes to Volume 1: Basic Architecture – Chapter 4 Data Types

March 4, 2011

1. <4.1.1> However, to improve the performance of programs, data structures (especially stacks) should be aligned on natural boundaries whenever possible. The reason for this is that the processor requires two memory accesses to make an unaligned memory access; aligned accesses require only one memory access. Why?

Answer: <http://en.wikipedia.org/wiki/Data_structure_alignment>

Problems:

A computer accesses memory by a single memory word at a time. As long as the memory word size is at least as large as the largest primitive data type supported by the computer, aligned accesses will always access a single memory word. This may not be true for misaligned data accesses.

If the highest and lowest bytes in a datum are not within the same memory word the computer must split the datum access into multiple memory accesses. This requires a lot of complex circuitry to generate the memory accesses and coordinate them. To handle the case where the memory words are in different memory pages the processor must either verify that both pages are present before executing the instruction or be able to handle a TLB miss or a page fault on any memory access during the instruction execution.

When a single memory word is accessed the operation is atomic, i.e. the whole memory word is read or written at once and other devices must wait until the read or write operation completes before they can access it. This may not be true for unaligned accesses to multiple memory words, e.g. the first word might be read by one device, both words written by another device and then the second word read by the first device so that the value read is neither the original value nor the updated value. Although such failures are rare, they can be very difficult to identify.

2. <4.2.1.2> two’s complement.

Answer: <http://en.wikipedia.org/wiki/Two%27s_complement>

The two’s complement of a binary number is defined as the value obtained by subtracting the number from a large power of two (specifically, from 2^N for an N-bit two’s complement).

In finding the two’s complement of a binary number, the bits are inverted, or “flipped”, by using the bitwise NOT operation; the value of 1 is then added to the resulting value. Bit overflow is ignored; taking the two’s complement of zero is the normal case where such overflow occurs.

Note that the two’s complement of zero is zero: inverting gives all ones, and adding one changes the ones back to zeros (the overflow is ignored). Also the two’s complement of the most negative number representable (e.g. a one as the most-significant bit and all other bits zero) is itself.

A more formal definition of a two’s-complement negative number (denoted by N* in this example) is derived from the equation N* = 2^n − N, where N is the corresponding positive number and n is the number of bits in the representation.

3. <4.3> Near pointer and far pointer.

A near pointer is a 32-bit (or 16-bit) offset (also called an effective address) within a segment. Near pointers are used for all memory references in a flat memory model or for references in a segmented model where the identity of the segment being accessed is implied.

A far pointer is a logical address, consisting of a 16-bit segment selector and a 32-bit (or 16-bit) offset. Far pointers are used for memory references in a segmented memory model where the identity of a segment being accessed must be specified explicitly.

4. <4.7> BCD.

Binary-coded decimal integers (BCD integers) are unsigned 4-bit integers with valid values ranging from 0 to 9. The IA-32 architecture defines operations on BCD integers located in one or more general-purpose registers or in one or more x87 FPU registers.

When operating on BCD integers in x87 FPU data registers, BCD values are packed in an 80-bit format and referred to as decimal integers. In this format, the first 9 bytes hold 18 BCD digits, 2 digits per byte. The most significant bit of byte 10 contains the sign bit (0 = positive and 1 = negative; bits 0 through 6 of byte 10 are don’t care bits). Negative decimal integers are not stored in two’s complement form; they are distinguished from positive decimal integers only by the sign bit.

Please see Table 4-4.  Packed Decimal Integer Encodings

[Linux] Black Screen of Ubuntu-10.04.2-desktop-amd64 Virtual Machine (VMware Player 3.1.3) on Win7

March 3, 2011

Today I installed an Ubuntu-10.04.2-desktop-amd64 VM on my Win7 host with the VMware Player tool. The installation is quite simple with the cool “Easy Install” feature, which installs Ubuntu and VMware Tools in almost unattended mode.

The first time, it brought me to text mode. I entered my user name and password, then ran “startx” to enter graphical mode. Before I could do anything, it went to a black screen and stopped responding to any input, so I closed the VM forcibly. The next time I started it, the black screen came back. I googled some suggestions, including “delete the virtual disks of Daemon Tools” and “disable Accelerate 3D Graphics under display settings”, none of which helped.

Finally I removed the VM from the list and chose “Open a Virtual Machine”. It showed me that my VM was in the “Suspended” state. OK, got it!

Start the VM and press “Ctrl Alt Delete” on the black screen. It wakes up again.

Actually I should use “Ctrl Alt Insert” in the VM, since “Ctrl Alt Delete” would be received by both the VM and the host.

[FDE] Good Pages for Pointsec

February 17, 2011

http://www.blackfistsecurity.com/2009/01/pointsec-for-pc-master-boot-record.html

[Android] Start to Work with Android 2.3

December 23, 2010

1. Create an Ubuntu 10.04 amd64 OS and set up the environment for Android 2.3.

Please follow the latest “Get Android Source Code” page (http://source.android.com/source/download.html).

Note: The space before “lucid partner” in the line below cannot be omitted:

sudo add-apt-repository “deb http://archive.canonical.com/ lucid partner”

2. Update SDK to Android 2.3 (eclipse-java-galileo-SR2-win32)

1). Help -> Check for Updates. Update ADT and other tools to 8.0.1

2). Window -> Android SDK and AVD Manager. Get below packages:

Android SDK Tools, revision 8

Android SDK Platform-tools, revision 1

Documentation for Android SDK, API 9, revision 1

SDK Platform Android 2.3, API 9, revision 1

Samples for SDK API 9, revision 1

NOTE: In my first try with Android-2.3_r1, the build could be made to work on an Ubuntu 8.04 32-bit OS with a fix for the following issue:

“Only 64-bit build environments are supported beyond froyo/2.2.”

http://groups.google.com/group/android-platform/browse_thread/thread/b0bb991131589363

A summary for the solution (Tested on Ubuntu 8.04 32-bit):

1). Switch to JDK 1.6

$ update-java-alternatives -l

java-1.5.0-sun 53 /usr/lib/jvm/java-1.5.0-sun
java-6-sun 63 /usr/lib/jvm/java-6-sun

/* If  java-6-sun is not listed in the output: */

$ sudo apt-get install sun-java6-jdk

/* Set the system to use the right version of java by default: */

$ sudo update-java-alternatives -s java-6-sun

2). Modify ./build/core/main.mk: comment out the line below and insert a new one as follows:

# ifneq (64,$(findstring 64,$(build_arch)))
ifneq (i686,$(findstring i686,$(build_arch)))

[NOTE]: Please run the command below to get the right architecture that you are using:

$ uname -m

i686

3). Comment out the following lines:

#    LOCAL_CFLAGS += -m64
#    LOCAL_LDFLAGS += -m64

in files:
./external/clearsilver/cgi/Android.mk
./external/clearsilver/java-jni/Android.mk
./external/clearsilver/util/Android.mk
./external/clearsilver/cs/Android.mk

or change 64 to 32.

When I synced to Android-2.3.3_r1, a new issue, “GLIBC_2.11 not found”, arose. To get away from such weird problems, I finally decided to upgrade to Ubuntu 10.04 amd64 as Google suggested. That took less time than I expected, and the build ran smoothly without the above issues.

[UEFI] The _EFI_INT_SIZE_OF(n) Macro

December 10, 2010

You can find the macro below in the EfiStdArg.h file (in EDK):

#define _EFI_INT_SIZE_OF(n) ((sizeof (n) + sizeof (UINTN) - 1) &~(sizeof (UINTN) - 1))

The macro rounds a non-negative integer up to the nearest multiple of sizeof(UINTN). For example, sizeof(short) is 2, while _EFI_INT_SIZE_OF(short) would be 4 on a 32-bit UEFI BIOS.

Refer to the Division Algorithm topic on Wikipedia.

The generalized division algorithm:

For any integers a and b (b not equal to 0), there exist a unique integer q and a unique non-negative integer r such that

a = bq + r, 0 <= r < |b|.

q and r are called the integer quotient and the b-bounded remainder of the division of a by b, respectively.

q = [a/b], r = a - [a/b]b, where [a/b] is the largest integer less than or equal to a/b.

For our requirement, rounding a up to the nearest multiple of b: bq is what we want when r = 0, while b(q+1) is what we want when r > 0. We can express the requirement as follows:

a = bq + r’, -b < r’ <= 0.

Transforming the above into the form of the generalized division algorithm:

a + b - 1 = bq + (r' + b - 1), 0 <= r' + b - 1 < b.

Now q = [(a + b - 1)/b], so bq = [(a + b - 1)/b]b.

If b is a k-th power of two, division can be implemented by right shift of k bits, and multiplication by left shift of k bits. The same effect can be achieved by zero clearing the lowest k bits.

bq = (a + b - 1) & ~(b - 1)

I am grateful for the post at http://bbs.chinaunix.net/viewthread.php?tid=814501, though it is a Chinese explanation of the macro.

[MSDN – Tools] Predefined C/C++ Types

December 9, 2010

As we know, size_t is a library type. If we want to use this type in our projects, a header file (stddef.h for C, cstddef for C++, among other headers) must be included. But hold on, you may find that the header file is not a must when using the Microsoft Visual C/C++ compiler. Every C and C++ source file compiled with Microsoft Visual C++ has, in effect, a forced inclusion of a header file written by Microsoft to provide some predefined types. size_t is one of those predefined C/C++ types.

The C++ compiler pre-defines size_t no matter what:

#if Wp64

typedef __w64 unsigned int size_t;

#else

typedef unsigned int size_t;

#endif

but the C compiler defines it only conditionally:

#if Wp64

typedef __w64 unsigned int size_t;

#endif

For example, a file named “size_t.c” as follows:

//typedef  unsigned __int64 size_t;

int main()
{
    size_t i = 0;
    i++;
    return i;
}

With a Microsoft 32-bit C/C++ Optimizing Compiler:

> cl size_t.c

size_t.c(3) : error C2065: ‘size_t’ : undeclared identifier

An error is reported since the Microsoft C compiler defines size_t only when Wp64 is defined.

> cl /Wp64 size_t.c

cl : Command line warning D9035 : option ‘Wp64’ has been deprecated and will be
removed in a future release

You get a warning, but the compilation succeeds because size_t has been defined under the /Wp64 option.

Note: Please look up MSDN for the meaning of the warning D9035.

> cl /Tp size_t.c

It compiles without any warnings or errors; the Microsoft C++ compiler always predefines the size_t type.

If you uncomment the first line of the size_t.c file, then build it as a C++ file again:

> cl /Tp size_t.c

size_t.c(1) : error C2371: ‘size_t’ : redefinition; different basic types
predefined C++ types (compiler internal)(19) : see declaration of ‘size_t’

An error occurs because of the incompatibility between the predefined size_t (unsigned int for 32-bit) and the global one (unsigned __int64).

If you use a Microsoft C/C++ Optimizing Compiler for a 64-bit target (x64 or Itanium), Wp64 is implicitly defined, so size_t is always defined regardless of whether the header files are included.


Unfortunately, MSDN has not mentioned any details about the predefined C/C++ types. The only web page related to this topic that I can find is: http://members.ozemail.com.au/~geoffch/samples/programming/msvc/language/predefined/index.html

I would appreciate it if anyone could provide more details about this.


[MSDN – Tools] /GL (Whole Program Optimization) and /LTCG (Link-time Code Generation)

December 8, 2010

There is a good article on this topic: [Under The Hood: Link-time Code Generation] at http://msdn.microsoft.com/en-us/magazine/cc301698.aspx.

A summary:

Normal: the compiler front end outputs IL -> the front end invokes the back end to generate .OBJ code targeted to the CPU -> the linker takes all the OBJ files, along with any supplied .LIB files, and creates an executable image.

LTCG: the compiler front end outputs IL and emits an OBJ file with IL in it -> the linker calls COM methods in the back end to generate the final, processor-specific code -> the linker generates the executable image.

NOTE: In the normal build process the OBJ file has COFF format, while in the LTCG case the OBJ file contains IL, whose format is undocumented and subject to change from version to version of the compiler. So you cannot examine such OBJ files with tools like dumpbin.


[Android] About Resources

November 15, 2010

http://developer.android.com/reference/android/content/package-descr.html: This topic includes a terminology list associated with resources, and a series of examples of using resources in code.
For a complete guide on creating and using resources, see the document on http://developer.android.com/guide/topics/resources/index.html
For a reference on the supported Android resource types, see http://developer.android.com/guide/topics/resources/available-resources.html

Accessing Resources in Code:

[<package_name>.]R.<resource_type>.<resource_name>

Accessing Resources from XML:

@[<package_name>:]<resource_type>/<resource_name>

Referencing style attributes:

?[<package_name>:][<resource_type>/]<resource_name>

[Linux – GNU Make] The %.o: %.c Built-in Rule And The CPPFLAGS Variable

November 9, 2010

Here is the built-in rule for updating an object file from its C source:

%.o: %.c
        $(COMPILE.c) $(OUTPUT_OPTION) $<

The customization of this rule is controlled entirely by the set of variables it uses. We see two variables here, but COMPILE.c in particular is defined in terms of several other variables:

COMPILE.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c
OUTPUT_OPTION = -o $@

CPPFLAGS holds extra flags for the C preprocessor and programs that use it; extra flags for the C++ compiler belong in the CXXFLAGS variable. I have seen a makefile that mixes the two up. The result is that you see duplicate flags, as follows:

Makefile:

INCLUDES = -I. -I../src
CFLAGS = -Wall -Wextra $(INCLUDES) -O1
CPPFLAGS += $(CFLAGS)

foo: foo.o

Command line output:

gcc -Wall -Wextra -I. -I../src -O1 -Wall -Wextra -I. -I../src -O1 -c -o foo.o foo.c
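A conventional fix is to keep preprocessor flags and compiler flags in their own variables and let the built-in rule combine them. A minimal sketch of the corrected makefile:

```makefile
# Preprocessor flags (include paths, -D macros) go in CPPFLAGS;
# compiler-only flags (warnings, optimization) go in CFLAGS.
CPPFLAGS = -I. -I../src
CFLAGS   = -Wall -Wextra -O1

# The built-in %.o: %.c rule then expands to something like:
#   gcc -Wall -Wextra -O1 -I. -I../src -c -o foo.o foo.c
foo: foo.o
```

Each set of flags now appears exactly once in the compile command.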