You will need PHP 5.2 or above to successfully build Thrift 0.5.0, or you will see the following error:
'zend_std_get_constructor' was not declared in this scope.
Download the PHP 5.2.14 source and go through the usual "configure -> make -> sudo make install" cycle.
Monday, November 22, 2010
Friday, November 19, 2010
Thursday, November 4, 2010
Installing CentOS 4.4 using USB
OK, I know this is an old version, but your environment may still call for this kind of system for some reason.
1. Download the following from http://vault.centos.org/4.4/isos/i386
CentOS-4.4-i386-bin1of4.iso
CentOS-4.4-i386-bin2of4.iso
CentOS-4.4-i386-bin3of4.iso
CentOS-4.4-i386-bin4of4.iso
Windows
2. Use MagicDisc to mount the ISOs to different drives. Open the drives and copy all the files to a directory. Overwrite the files that have the same names.
Linux
2. Mount the iso
# mkdir /mnt/iso
# mount -o loop disk1.iso /mnt/iso
Copy all the files to a directory, e.g. centos44
# cp -r /mnt/iso/* centos44
Do the above for all the iso files.
3. Windows: Use MagicISO to write the files to an ISO; Linux:
# mkisofs -o centos44-i386.iso ./centos44
4. Use UNetbootin (http://unetbootin.sourceforge.net) to write the ISO to your USB drive
5. Boot the system to be installed from the USB drive. Select HTTP installation.
Site: vault.centos.org
Directory: 4.4/os/i386
The reason we have to do this instead of using the CentOS-4.4.ServerCD-i386.iso file is that the
Server ISO does not match the installation directory structure, so the installation will not start.
Friday, October 29, 2010
Find out gcc version in a script
user@local$ gcc -v 2>&1 | tail -1 | awk '{print $3}'
4.4.3
Only want the major version number:
gcc -v 2>&1 | tail -1 | awk '{print $3}'|cut -d. -f1
4
Thursday, September 16, 2010
log4j and Scribe
Well, I thought it should be fairly easy, but it turned out it's not that straightforward, especially for a Java newbie like me. Basically we need to make a log4j appender for Scribe. Fortunately it's available from http://github.com/lenn0x/Scribe-log4j-Appender. The package contains scribelog4j.jar already. Just in case you want to build it yourself or change the referenced libraries, here are the instructions. The following was tested on CentOS 5.3 64-bit.
1. Download and extract log4j from http://logging.apache.org/log4j/1.2/download.html
2. Download and extract Scribe 2.2 and Thrift from http://github.com/facebook/scribe/downloads and http://incubator.apache.org/thrift/download (the current version is 0.5.0; I am using 0.2.0 in this example).
Build and install Thrift. Please refer to Step 14 in the Installing & Running Scribe with HDFS support on CentOS article.
3. Download and extract slf4j (Simple Logging Facade for Java) from http://www.slf4j.org/download.html
4. Install the latest ant if you haven't. See the previous post: Installing Latest Ant on CentOS. The Thrift jar cannot be built with ant 1.6.5 from the original CentOS repository.
5. Build libthrift.jar. From top Thrift directory:
[user@localhost:~/pkgs/thrift-0.2.0] cd lib/java
[user@localhost:~/pkgs/thrift-0.2.0/lib/java] ant
You should see libthrift.jar in the current directory.
6. Build libfb303.jar. From top Thrift directory:
[user@localhost:~/pkgs/thrift-0.2.0] cd contrib/fb303/java
Edit build.xml file.
Find "<classpath>" and change the value of "pathelement location" to where your libthrift.jar is.
My thrift is in $HOME/pkgs/thrift-0.2.0 directory. So I add a new property name on the top.
<property name="pkgs.dir" value="${user.home}/pkgs"/>
Also slf4j's api jar has to be added to classpath as well.
The pathelement location lines should look like the following:
<pathelement location="${pkgs.dir}/thrift-0.2.0/lib/java/libthrift.jar"/>
<pathelement location="${pkgs.dir}/slf4j-1.6.1/slf4j-api-1.6.1.jar"/>
Remove FacebookBase.java file. This has to be done or you will get "getStatus() in com.facebook.fb303.FacebookBase cannot implement getStatus() in com.facebook.fb303.FacebookService.Iface; attempting to use incompatible return type" error.
[user@localhost:~/pkgs/thrift-0.2.0/contrib/fb303/java] rm FacebookBase.java
Run ant.
[user@localhost:~/pkgs/thrift-0.2.0/contrib/fb303/java] ant
You should find libfb303.jar in build/lib directory
[user@localhost:~/pkgs/thrift-0.2.0/contrib/fb303/java] ll build/lib/
total 176
drwxr-xr-x 2 user user 4096 Sep 16 12:09 .
drwxr-xr-x 4 user user 4096 Sep 16 11:45 ..
-rw-r--r-- 1 user user 165162 Sep 16 12:09 libfb303.jar
7. Build scribe.jar. From top level scribe directory:
[user@localhost:~/pkgs/scribe] cd if
Edit scribe.thrift
Add the following line in the file.
namespace java scribe
Copy fb303 thrift directory here.
[user@localhost:~/pkgs/scribe/if] cp -r ~/pkgs/thrift-0.2.0/contrib/fb303/ .
And copy fb303's build.xml here as well
[user@localhost:~/pkgs/scribe/if] cp ~/pkgs/thrift-0.2.0/contrib/fb303/java/build.xml .
Edit build.xml to build scribe.jar. The file should look like the following; the magenta text indicates the modifications for building scribe.jar.
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<project name="scribe" default="dist" basedir="..">
<!-- project wide settings. All directories relative to basedir -->
<property name="src.dir" value="if"/>
<property name="if.dir" value="if"/>
<property name="thrift_home" value="/usr/local"/>
<property name="pkgs.dir" value="${user.home}/pkgs"/>
<!-- temp build directories -->
<property name="build.dir" value="${src.dir}/build"/>
<property name="build.classes.dir" value="${build.dir}/classes"/>
<property name="build.lib.dir" value="${build.dir}/lib"/>
<!-- final distribution directories -->
<property name="dist.dir" value="/usr/local"/>
<property name="dist.lib.dir" value="${dist.dir}/lib"/>
<!-- make temporary and distribution directories -->
<target name="prepare">
<mkdir dir="${build.dir}"/>
<mkdir dir="${build.classes.dir}"/>
<mkdir dir="${build.lib.dir}"/>
<mkdir dir="${dist.dir}"/>
<mkdir dir="${dist.lib.dir}"/>
</target>
<!-- generate scribe thrift code -->
<target name="scribebuild">
<echo>generating thrift scribe files</echo>
<exec executable="${thrift_home}/bin/thrift" failonerror="true">
<arg line="--gen java -o ${src.dir} ${src.dir}/../if/scribe.thrift" />
</exec>
<move todir="${src.dir}">
<fileset dir="${src.dir}/gen-java/scribe">
<include name="**/*.java"/>
</fileset>
</move>
</target>
<!-- compile the base and thrift generated code and jar them -->
<target name="dist" depends="prepare,scribebuild">
<echo>Building scribe.jar .... </echo>
<javac srcdir="${src.dir}" destdir="${build.classes.dir}" debug="on">
<classpath>
<pathelement location="${pkgs.dir}/thrift-0.4.0/lib/java/libthrift.jar"/>
<pathelement location="${pkgs.dir}/slf4j-1.6.1/slf4j-api-1.6.1.jar"/>
<pathelement location="${pkgs.dir}/thrift-0.4.0/contrib/fb303/java/build/lib/libfb303.jar"/>
</classpath>
<include name="*.java"/>
<include name="${build.dir}/scribe"/>
</javac>
<jar jarfile="${build.lib.dir}/scribe.jar" basedir="${build.classes.dir}">
</jar>
</target>
<!-- copy the build jar to the distribution library directory -->
<target name="install" depends="dist">
<copy todir="${dist.lib.dir}">
<fileset dir="${build.lib.dir}" includes="scribe.jar"/>
</copy>
</target>
<target name="clean">
<echo>Cleaning old stuff .... </echo>
<delete dir="${build.dir}/classes/com"/>
<delete dir="${build.dir}/lib"/>
<delete dir="${build.dir}"/>
</target>
</project>
Run ant.
[user@localhost:~/pkgs/scribe/if] ant
You will see scribe.jar in build/lib directory.
[user@localhost:~/pkgs/scribe/if] ll build/lib/
total 36
drwxr-xr-x 2 user user 4096 Sep 16 12:55 .
drwxr-xr-x 4 user user 4096 Sep 16 12:55 ..
-rw-r--r-- 1 user user 27002 Sep 16 12:58 scribe.jar
8. Build scribelog4j.jar
First Get Scribe-log4j-Appender using git.
If git is not installed.
yum -y install git
Get the code.
git clone http://github.com/lenn0x/Scribe-log4j-Appender.git
The code will be in Scribe-log4j-Appender directory. Change to the directory.
[user@localhost:~/pkgs/Scribe-log4j-Appender] cd src/java/org/apache/log4j
Copy scribe's build.xml here.
[user@localhost:~/pkgs/Scribe-log4j-Appender/src/java/org/apache/log4j] cp ~/pkgs/scribe/if/build.xml .
Edit build.xml and it should look like the following. Adjust the magenta part if needed.
<project name="scribelog4j" default="dist" basedir="..">
<!-- project wide settings. All directories relative to basedir -->
<property name="src.dir" value="log4j/scribe"/>
<property name="if.dir" value="if"/>
<property name="thrift_home" value="/usr/local"/>
<property name="pkgs.dir" value="${user.home}/pkgs"/>
<!-- temp build directories -->
<property name="build.dir" value="${src.dir}/build"/>
<property name="build.classes.dir" value="${build.dir}/classes"/>
<property name="build.lib.dir" value="${build.dir}/lib"/>
<!-- final distribution directories -->
<property name="dist.dir" value="/usr/local"/>
<property name="dist.lib.dir" value="${dist.dir}/lib"/>
<!-- make temporary and distribution directories -->
<target name="prepare">
<mkdir dir="${build.dir}"/>
<mkdir dir="${build.classes.dir}"/>
<mkdir dir="${build.lib.dir}"/>
<mkdir dir="${dist.dir}"/>
<mkdir dir="${dist.lib.dir}"/>
</target>
<!-- compile the base and thrift generated code and jar them -->
<target name="dist" depends="prepare">
<echo>Building scribelog4j.jar .... </echo>
<javac srcdir="${src.dir}" destdir="${build.classes.dir}" debug="on">
<classpath>
<pathelement location="${pkgs.dir}/thrift-0.2.0/lib/java/libthrift.jar"/>
<pathelement location="${pkgs.dir}/slf4j-1.6.1/slf4j-api-1.6.1.jar"/>
<pathelement location="${pkgs.dir}/thrift-0.2.0/contrib/fb303/java/build/lib/libfb303.jar"/>
<pathelement location="${pkgs.dir}/scribe/if/build/lib/scribe.jar"/>
</classpath>
<include name="*.java"/>
<include name="${build.dir}/scribe"/>
</javac>
<jar jarfile="${build.lib.dir}/scribelog4j.jar" basedir="${build.classes.dir}">
</jar>
</target>
<!-- copy the build jar to the distribution library directory -->
<target name="install" depends="dist">
<copy todir="${dist.lib.dir}">
<fileset dir="${build.lib.dir}" includes="scribelog4j.jar"/>
</copy>
</target>
<target name="clean">
<echo>Cleaning old stuff .... </echo>
<delete dir="${build.dir}/classes/com"/>
<delete dir="${build.dir}/lib"/>
<delete dir="${build.dir}"/>
</target>
</project>
Run ant.
[user@localhost:~/pkgs/Scribe-log4j-Appender/src/java/org/apache/log4j] ant
You should get scribelog4j.jar in the current directory.
9. Write and run a test program
Here is the test program called scribelog4j_test.java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class scribelog4j_test {
private static Logger logger = Logger.getLogger(scribelog4j_test.class);
public static void main(String[] args) {
long time = System.currentTimeMillis();
logger.info("main method called..");
logger.info("another informative message");
logger.warn("This one is a warning!");
logger.log(Level.TRACE,
"And a trace message using log() method.");
long logTime = System.currentTimeMillis() - time;
logger.debug("Time taken to log the previous messages: "
+ logTime + " msecs");
// Exception logging example:
try{
String subs = "hello".substring(6);
}catch (Exception e){
logger.error("Error in main() method:", e);
}
}
}
Create a file called log4j.properties in the current directory. It should look like this:
log4j.rootLogger=debug, stdout, R, scribe
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern to output the caller's file name and line number.
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=example.log
log4j.appender.R.MaxFileSize=100KB
# Keep one backup file
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
#
# Add this to your log4j.properties
#
log4j.appender.scribe=org.apache.log4j.scribe.ScribeAppender
log4j.appender.scribe.scribe_category=MyScribeCategoryName
log4j.appender.scribe.scribe_host=127.0.0.1
log4j.appender.scribe.layout=org.apache.log4j.PatternLayout
log4j.appender.scribe.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L) %m%n
For convenience, copy all the necessary jars to a directory called lib.
[user@localhost:~/proj/scribe_log4j] ls lib
libfb303.jar log4j-1.2.16.jar scribelog4j.jar slf4j-log4j12-1.6.1.jar
libthrift.jar scribe.jar slf4j-api-1.6.1.jar
To build it:
/usr/bin/javac -classpath .:lib/scribelog4j.jar:lib/log4j-1.2.16.jar ./scribelog4j_test.java
To run it:
Start the Scribe server in a new terminal.
[user@localhost:~/scribe/src] scribed ../examples/example1.conf
In the original terminal, run scribelog4j_test program.
[user@localhost:~/proj/scribe_log4j] /usr/bin/java -classpath .:lib/log4j-1.2.16.jar:lib/scribelog4j.jar:lib/libthrift.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/scribe.jar:lib/libfb303.jar scribelog4j_test
You should find the log file in /tmp/scribetest directory.
Wednesday, September 15, 2010
Installing Latest Ant on CentOS
To install the latest version of ant (ant-1.7.1-7.jpp5.noarch) from JPackage instead of the version provided by yum (ant-1.6.5-2jpp.2.x86_64), here are the steps on a CentOS 5.3 64-bit system.
1. Download and install JPackage rpm (jpackage-release-5-4.jpp5.noarch.rpm) from JPackage site. This will add jpackage.repo to your /etc/yum.repos.d directory.
2. Use yum to search for "ant"; you will see ant.noarch provided by the JPackage repo in the list. But if you try to install it now, the following error occurs.
Error: Missing Dependency: /usr/bin/rebuild-security-providers is needed by package java-1.4.2-gcj-compat
Check which package is providing rebuild-security-providers.
rpm -q --whatprovides /usr/bin/rebuild-security-providers
jpackage-utils-1.7.3-1jpp.2.el5
jpackage-utils has been installed in the system.
Looks like this RedHat version of jpackage-utils doesn't work with this latest version of ant from JPackage.
Solution:
a. Uninstall jpackage-utils without uninstalling all the dependencies.
[user@localhost] sudo rpm -e --nodeps jpackage-utils-1.7.3-1jpp.2.el5
b. Install jpackage-utils (jpackage-utils-5.0.0-2.jpp5.noarch.rpm) from JPackage.
[user@localhost] sudo rpm -Uvh jpackage-utils-5.0.0-2.jpp5.noarch.rpm
c. Install ant.noarch
[user@localhost] sudo yum install ant.noarch
Tuesday, August 31, 2010
Installing & Running Scribe with HDFS support on CentOS
I have been working on Scribe for our logging for a while, and I finally feel like writing something about it. The main trigger came from a request to support HDFS. Since I always like to tackle open source projects' incompatibilities across different environments, I made this challenge the highest priority (although my boss probably doesn't think so...)
Installing Scribe on CentOS 5.3 64-bit
1. Java SE Development Kit (JDK) 6 latest update - http://www.oracle.com/technetwork/java/javase/downloads/index.html. I used update 20. The java directory is: /usr/java/jdk1.6.0_20.
2. ruby-1.8.5-5.el5_4.8 + ruby-devel-1.8.5-5.el5_4.8 (using yum)
3. python-2.4.3-24.el5 + python-devel-2.4.3-24.el5.x86_64 (using yum)
4. libevent + libevent-devel - libevent-1.4.13-1/libevent-devel-1.4.13-1 (using yum)
5. gcc-c++-4.1.2-46.el5_4.2
6. boost 1.40 - http://downloads.sourceforge.net/project/boost/boost/1.40.0/boost_1_40_0.tar.gz?use_mirror=softlayer
[user@localhost] ./bootstrap.sh
[user@localhost] ./bjam
[user@localhost] sudo su -
[root@localhost] ./bjam install
7. flex-2.5.4a-41.fc6 (using yum)
8. m4-1.4.15 - ftp.gnu.org/gnu/m4 (do not use the version from yum)
9. imake-1.0.2-3.x86_64 (using yum)
10. autoconf-2.65 - ftp.gnu.org/gnu/autoconf (do not use the version from yum)
11. automake-1.11.1 - ftp.gnu.org/gnu/automake (do not use the version from yum)
12. libtool-2.2.6b - ftp.gnu.org/gnu/libtool (do not use the version from yum)
13. bison-2.3-2.1 (using yum). The build actually needs yacc; the following creates a yacc script that simply calls bison.
[root@localhost] more /usr/bin/yacc
#!/bin/sh
exec bison -y "$@"
[root@localhost] chmod +x /usr/bin/yacc
14. thrift - latest version: http://incubator.apache.org/thrift/download
thrift-0.2.0 - http://archive.apache.org/dist/incubator/thrift/0.2.0-incubating
thrift-0.4.0 - http://archive.apache.org/dist/incubator/thrift/0.4.0-incubating
I am using thrift-0.2.0 in this example.
[user@localhost] ./bootstrap.sh
[user@localhost] ./configure
If you see this:
error: ./configure: line 21183: syntax error near unexpected token `MONO,'
Copy pkg.m4 in /usr/share/aclocal to thrift's aclocal directory. From the top-level thrift directory, do the following:
cp /usr/share/aclocal/pkg.m4 aclocal
Then again:
[user@localhost] ./bootstrap.sh
[user@localhost] ./configure
[user@localhost] make
[user@localhost] sudo su -
[root@localhost] make install
[root@localhost] exit
You may see the following error when building thrift 0.4.0 and 0.5.0.
make[4]: Entering directory `/home/user/pkgs/thrift-0.4.0/lib/cpp'
/bin/sh ../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I../.. -I/usr/local/include -I./src -Wall -g -O2 -MT ThreadManager.lo -MD -MP -MF .deps/ThreadManager.Tpo -c -o ThreadManager.lo `test -f 'src/concurrency/ThreadManager.cpp' || echo './'`src/concurrency/ThreadManager.cpp
libtool: compile: g++ -DHAVE_CONFIG_H -I. -I../.. -I/usr/local/include -I./src -Wall -g -O2 -MT ThreadManager.lo -MD -MP -MF .deps/ThreadManager.Tpo -c src/concurrency/ThreadManager.cpp -fPIC -DPIC -o .libs/ThreadManager.o
In file included from src/concurrency/ThreadManager.cpp:20:
src/concurrency/ThreadManager.h:24:26: tr1/functional: No such file or directory
Please change line 24 of ThreadManager.h from
#include <tr1/functional>
to
#include <boost/tr1/tr1/functional>
We also need to compile and install the Facebook fb303 library. From the top-level thrift directory:
[user@localhost] cd contrib/fb303
[user@localhost] ./bootstrap.sh
[user@localhost] ./configure
[user@localhost] make
[user@localhost] sudo su -
[root@localhost] make install
[root@localhost] exit
15. hadoop 0.21.0 - http://www.apache.org/dyn/closer.cgi/hadoop/core/
[user@localhost] cd hadoop-0.21.0/hdfs/src/c++/libhdfs
[user@localhost] ./configure JVM_ARCH=tune=k8 --with-java=/usr/java/jdk1.6.0_20
[user@localhost] make
[user@localhost] sudo su -
[root@localhost] cp .libs/libhdfs.so .libs/libhdfs.so.0 /usr/local/lib
[root@localhost] cp hdfs.h /usr/local/include
[root@localhost] exit
16. scribe-2.2 - http://github.com/downloads/facebook/scribe/scribe-2.2.tar.gz - have to use scribe-2.1 or above to support HDFS.
[user@localhost] ./bootstrap.sh --enable-hdfs
[user@localhost] ./configure
[user@localhost] make
17. Configure and Run Hadoop (single-node cluster in this tutorial).
a. First we need to modify a few configuration files. From top-level hadoop directory, edit conf/hadoop-env.sh, conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml files.
[user@localhost] more conf/hadoop-env.sh
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use. Required.
export JAVA_HOME=/usr/java/jdk1.6.0_20
.
.
.
[user@localhost] more conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
[user@localhost] more conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
[user@localhost] more conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
b. Format the namenode. From top-level hadoop directory
[user@localhost] bin/hadoop namenode -format
c. Start hadoop
[user@localhost] start-all.sh
d. Use jps to check if all the processes are started.
[user@localhost] jps
25362 JobTracker
24939 NameNode
25099 DataNode
25506 TaskTracker
25251 SecondaryNameNode
25553 Jps
e. Use netstat to check if port 9000 (set in core-site.xml) is listening.
[user@localhost] sudo netstat -nap | grep 9000
sudo netstat -nap | grep 9000
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 24939/java
tcp 0 0 127.0.0.1:9000 127.0.0.1:59957 ESTABLISHED 24939/java
tcp 0 0 127.0.0.1:9000 127.0.0.1:59960 ESTABLISHED 24939/java
tcp 0 0 127.0.0.1:59957 127.0.0.1:9000 ESTABLISHED 25099/java
tcp 0 0 127.0.0.1:59960 127.0.0.1:9000 ESTABLISHED 25251/java
f. Open a browser and type server_ip:50070 to check if it shows "1 Live Nodes" in the Cluster Summary. Be patient; sometimes it takes a while (30 seconds to 1 minute) to show. Remember to put the cursor in the address bar and press enter to refresh the page. For some reason, F5 (Reload) doesn't work for me.
18. Configure and Run Scribe
a. Set up the java class paths for Scribe. I have installed my hadoop 0.21.0 in ~/pkgs/hadoop-0.21.0 directory
[user@localhost] export CLASSPATH=~/pkgs/hadoop-0.21.0/hadoop-hdfs-0.21.0.jar:~/pkgs/hadoop-0.21.0/lib/commons-logging-1.1.1.jar:~/pkgs/hadoop-0.21.0/hadoop-common-0.21.0.jar
b. Edit a Scribe configuration file to use HDFS. Change to scribe src directory
[user@localhost] more scribe_hdfs.conf
port=1463
max_msg_per_second=2000000
check_interval=3
# DEFAULT
<store>
category=default
type=buffer
target_write_size=20480
max_write_interval=1
buffer_send_rate=2
retry_interval=30
retry_interval_range=10
<primary>
type=file
fs_type=hdfs
file_path=hdfs://localhost:9000/scribetest
create_symlink=no
use_hostname_sub_directory=yes
base_filename=thisisoverwritten
max_size=40000000
rotate_period=daily
rotate_hour=0
rotate_minute=5
add_newlines=1
</primary>
<secondary>
type=file
fs_type=std
file_path=/tmp/scribetest
base_filename=thisisoverwritten
max_size=3000000
</secondary>
</store>
c. Run Scribe
[user@localhost] scribed scribe_hdfs.conf
[Thu Sep 9 15:35:22 2010] "setrlimit error (setting max fd size)"
[Thu Sep 9 15:35:22 2010] "STATUS: STARTING"
[Thu Sep 9 15:35:22 2010] "STATUS: configuring"
[Thu Sep 9 15:35:22 2010] "got configuration data from file <scribe_hdfs.conf>"
[Thu Sep 9 15:35:22 2010] "CATEGORY : default"
[Thu Sep 9 15:35:22 2010] "Creating default store"
[Thu Sep 9 15:35:22 2010] "configured <1> stores"
[Thu Sep 9 15:35:22 2010] "STATUS: "
[Thu Sep 9 15:35:22 2010] "STATUS: ALIVE"
[Thu Sep 9 15:35:22 2010] "Starting scribe server on port 1463"
Thrift: Thu Sep 9 15:35:22 2010 libevent 1.4.13-stable method epoll
If it cannot run and complains the following:
libboost_system.so.1.40.0: cannot open shared object
Do the following:
[root@localhost] echo '/usr/local/lib/' >> /etc/ld.so.conf.d/my_boost.conf
[root@localhost] /sbin/ldconfig -v
If you see the following output when running scribe:
[user@localhost] scribed scribe_hdfs.conf
[Thu Sep 9 15:39:38 2010] "setrlimit error (setting max fd size)"
[Thu Sep 9 15:39:38 2010] "STATUS: STARTING"
[Thu Sep 9 15:39:38 2010] "STATUS: configuring"
[Thu Sep 9 15:39:38 2010] "got configuration data from file <scribe_hdfs.conf>"
[Thu Sep 9 15:39:38 2010] "CATEGORY : default"
[Thu Sep 9 15:39:38 2010] "Creating default store"
[Thu Sep 9 15:39:38 2010] "configured <1> stores"
[Thu Sep 9 15:39:38 2010] "STATUS: "
[Thu Sep 9 15:39:38 2010] "STATUS: ALIVE"
[Thu Sep 9 15:39:38 2010] "Starting scribe server on port 1463"
[Thu Sep 9 15:39:38 2010] "Exception in main: TNonblockingServer::serve() bind"
[Thu Sep 9 15:39:38 2010] "scribe server exiting"
It may be due to Port 1463 not being available. Run "netstat -nap | grep 1463" to find out which program is using it.
19. Send something to Scribe to log in HDFS
From a different terminal, in top-level Scribe directory:
[user@localhost] echo "hello world" | examples/scribe_cat test
In the terminal that runs Scribe server, you should see the following output:
[Thu Sep 9 15:46:14 2010] "[test] Creating new category from model default"
[Thu Sep 9 15:46:14 2010] "store thread starting"
[Thu Sep 9 15:46:14 2010] "[hdfs] Connecting to HDFS"
[Thu Sep 9 15:46:14 2010] "[hdfs] Before hdfsConnectNewInstance(localhost, 9000)"
Sep 9, 2010 3:46:14 PM org.apache.hadoop.security.Groups
INFO: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
[Thu Sep 9 15:46:15 2010] "[hdfs] After hdfsConnectNewInstance"
[Thu Sep 9 15:46:15 2010] "[hdfs] Connecting to HDFS"
[Thu Sep 9 15:46:15 2010] "[hdfs] Before hdfsConnectNewInstance(localhost, 9000)"
[Thu Sep 9 15:46:15 2010] "[hdfs] After hdfsConnectNewInstance"
[Thu Sep 9 15:46:15 2010] "[hdfs] opened for write hdfs://localhost:9000/scribetest/test/localhost.localdomain/test-2010-09-09_00000"
[Thu Sep 9 15:46:15 2010] "[test] Opened file for writing"
[Thu Sep 9 15:46:15 2010] "[test] Opened file for writing"
[Thu Sep 9 15:46:15 2010] "[test] Changing state from to "
Opening Primary
[Thu Sep 9 15:46:15 2010] "[test] successfully read <0> entries from file "
[Thu Sep 9 15:46:15 2010] "[test] No more buffer files to send, switching to streaming mode"
[Thu Sep 9 15:46:15 2010] "[test] Changing state from to "
20. Check if the message has been logged to HDFS:
Stop Scribe first (or make the file rotate); otherwise Hadoop won't write the file to the filesystem.
[user@localhost] hadoop fs -lsr /
10/09/09 16:26:02 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
10/09/09 16:26:03 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
drwxr-xr-x - user supergroup 0 2010-09-09 16:21 /jobtracker
drwxr-xr-x - user supergroup 0 2010-09-09 16:21 /jobtracker/jobsInfo
drwxr-xr-x - user supergroup 0 2010-09-09 16:23 /scribetest
drwxr-xr-x - user supergroup 0 2010-09-09 16:23 /scribetest/test
drwxr-xr-x - user supergroup 0 2010-09-09 16:23 /scribetest/test/localhost.localdomain
-rw-r--r-- 3 user supergroup 13 2010-09-09 16:25 /scribetest/test/localhost.localdomain/test-2010-09-09_00000
drwxr-xr-x - user supergroup 0 2010-09-09 16:21 /tmp
drwxr-xr-x - user supergroup 0 2010-09-09 16:21 /tmp/hadoop
drwxr-xr-x - user supergroup 0 2010-09-09 16:21 /tmp/hadoop/mapred
drwx------ - user supergroup 0 2010-09-09 16:21 /tmp/hadoop/mapred/system
-rw------- 1 user supergroup 4 2010-09-09 16:21 /tmp/hadoop/mapred/system/jobtracker.info
Get the directory out to take a look:
[user@localhost] hadoop fs -get /scribetest test
10/09/09 16:26:47 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
10/09/09 16:26:47 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
[user@localhost] more test/test/localhost.localdomain/test-2010-09-09_00000
hello world
One note about the Secondary store in the Scribe configuration file: Scribe opens the files for both the Primary and Secondary stores even in the normal situation, as long as replay_buffer is true (the default). It then tries to delete the secondary store file while the Primary store is handling the messages. This causes a problem because HDFS has not completed its access to the secondary store file. The following exception will happen:
[Thu Sep 9 16:02:03 2010] "[hdfs] deleteFile hdfs://localhost:9000/scribetest1/test/localhost.localdomain/test_00000"
[Thu Sep 9 16:02:03 2010] "[hdfs] Connecting to HDFS"
[Thu Sep 9 16:02:03 2010] "[hdfs] Before hdfsConnectNewInstance(localhost, 9000)"
[Thu Sep 9 16:02:03 2010] "[hdfs] After hdfsConnectNewInstance"
[Thu Sep 9 16:02:03 2010] "[test] No more buffer files to send, switching to streaming mode"
Exception in thread "main" java.io.IOException: Could not complete write to file /scribetest1/test/localhost.localdomain/test_00000 by DFSClient_1545136365
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:720)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
at org.apache.hadoop.ipc.Client.call(Client.java:905)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1406)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1393)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:66)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:91)
Call to org/apache/hadoop/fs/FSDataOutputStream::close failed!
[Thu Sep 9 16:02:03 2010] "[hdfs] closed hdfs://localhost:9000/scribetest1/test/localhost.localdomain/test_00000"
There is more information about this error under the "NameNode Logs" link on the http://server_ip:50070/dfshealth.jsp page.
To avoid this problem we should either set replay_buffer to false or make the secondary store local instead of HDFS (as in the configuration file example above, scribe_hdfs.conf).
The following configuration sets replay_buffer to false and uses HDFS for both the primary and secondary stores.
[user@localhost] more hdfs_both.conf
port=1463
max_msg_per_second=2000000
check_interval=3
# DEFAULT
<store>
category=default
type=buffer
replay_buffer=no
target_write_size=20480
max_write_interval=1
buffer_send_rate=2
retry_interval=30
retry_interval_range=10
<primary>
type=file
fs_type=hdfs
file_path=hdfs://localhost:9000/scribetest
create_symlink=no
use_hostname_sub_directory=yes
base_filename=thisisoverwritten
max_size=40000000
rotate_period=daily
rotate_hour=0
rotate_minute=5
add_newlines=1
</primary>
<secondary>
type=file
fs_type=hdfs
file_path=hdfs://localhost:9000/scribetest1
create_symlink=no
use_hostname_sub_directory=yes
base_filename=thisisoverwritten
max_size=40000000
rotate_period=daily
rotate_hour=0
rotate_minute=5
add_newlines=1
</secondary>
</store>
References:
1. Thomas Dudziak's blog: How to install Scribe with HDFS support on Ubuntu Karmic
2. Agile Testing: Compiling, installing and test-running Scribe
3. Google Scribe server group: Failover when writing to HDFS problems
Tuesday, August 17, 2010
Key Computer Architecture Techniques - Pipelining
Pipelining
It is an implementation technique where multiple instructions are overlapped in execution. The computer pipeline is divided in stages. Each stage completes a part of an instruction in parallel. The stages are connected one to the next to form a pipe - instructions enter at one end, progress through the stages, and exit at the other end.
Because the pipe stages are hooked together, all the stages must be ready to proceed at the same time. We call the time required to move an instruction one step further in the pipeline a machine cycle. The length of the machine cycle is determined by the time required for the slowest pipe stage.
For example, the classic RISC pipeline is broken into five stages with a set of flip flops between each stage.
1. Instruction fetch
2. Instruction decode and register fetch
3. Execute
4. Memory access
5. Register write back
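For example, with one instruction entering the pipeline per cycle and no stalls, five instructions overlap roughly like this (a simplified sketch using the five stages above, abbreviated IF/ID/EX/MEM/WB):
Cycle:   1    2    3    4    5    6    7    8    9
Instr 1  IF   ID   EX   MEM  WB
Instr 2       IF   ID   EX   MEM  WB
Instr 3            IF   ID   EX   MEM  WB
Instr 4                 IF   ID   EX   MEM  WB
Instr 5                      IF   ID   EX   MEM  WB
Once the pipeline is full, one instruction completes every cycle, even though each individual instruction still spends five cycles in flight.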
Pipelining does not help in all cases. There are several possible disadvantages. An instruction pipeline is said to be fully pipelined if it can accept a new instruction every clock cycle. A pipeline that is not fully pipelined has wait cycles that delay the progress of the pipeline.
Advantages of Pipelining:
1. The cycle time of the processor is reduced, thus increasing instruction issue-rate in most cases.
2. Some combinational circuits such as adders or multipliers can be made faster by adding more circuitry. If pipelining is used instead, it can save circuitry vs. a more complex combinational circuit.
Disadvantages of Pipelining:
1. A non-pipelined processor executes only a single instruction at a time. This prevents branch delays (in effect, every branch is delayed) and problems with serial instructions being executed concurrently. Consequently the design is simpler and cheaper to manufacture.
2. The instruction latency in a non-pipelined processor is slightly lower than in a pipelined equivalent. This is due to the fact that extra flip flops (pipeline registers/buffers) must be added to the data path of a pipelined processor.
3. A non-pipelined processor will have a stable instruction bandwidth. The performance of a pipelined processor is much harder to predict and may vary more widely between different programs.
Labels:
computer architecture,
computer science,
jobs,
pipeline,
pipelining
Friday, August 13, 2010
Computer Performance
From Stanford EE282 + Computer Architecture: A Quantitative Approach, 4th Edition
- Latency (also called execution time or response time): the wall-clock time to complete a task
- Bandwidth (also called throughput or execution rate): the number of tasks completed per unit of time
CPUTime = Seconds/Program
= Cycles/Program * Seconds/Cycle
= Instructions/Program * Cycles/Instruction * Seconds/Cycle
= IC * CPI * CCT
IC: Instruction Count
CPI: Clock Cycles Per Instruction
CCT: Clock Cycle Time
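As a quick worked example with assumed numbers: a program that executes IC = 2 * 10^9 instructions with CPI = 1.5 on a 1 GHz clock (CCT = 1 ns) takes
CPUTime = 2 * 10^9 * 1.5 * 1 ns = 3 seconds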
Amdahl’s Law
Speedup = Execution time for entire task without using the enhancement/Execution time for entire task using the enhancement when possible
It should be greater than 1 (when there is an improvement, that is)
new_execution_time
= (original_execution_time * (1 - enhanced_fraction)) + original_execution_time * enhanced_fraction * (1 / speedup_enhanced)
= (original_execution_time)((1-enhanced_fraction) + enhanced_fraction/speedup_enhanced)
speedup_overall
= original_execution_time / new_execution_time
= 1/((1-enhanced_fraction) + enhanced_fraction/speedup_enhanced)
In the case of parallelization, Amdahl's law states that if enhanced_fraction is the proportion of a program that can be made parallel (i.e. benefit from parallelization), and (1 − enhanced_fraction) is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using N processors (N times faster for the part that can be enhanced = speedup_enhanced) is
speedup_overall
= 1/((1-enhanced_fraction) + enhanced_fraction/speedup_enhanced)
In the limit, as N tends to infinity, the maximum speedup tends to 1 / (1 − enhanced_fraction).
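As a quick worked example with assumed numbers: if enhanced_fraction = 0.8 and speedup_enhanced = N = 4, then
speedup_overall = 1/((1 - 0.8) + 0.8/4) = 1/(0.2 + 0.2) = 2.5
No matter how many processors are added, the overall speedup can never exceed 1/(1 - 0.8) = 5.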
Friday, August 6, 2010
System Diagram of A Modern Laptop
From Wikipedia Intel X58
- Intel X58: Intel X58 Chipset
- QPI: The Intel QuickPath Interconnect is a point-to-point processor interconnect developed by Intel to compete with HyperTransport
- I/O Controller Hub (ICH), also known as Intel 82801, is an Intel southbridge on motherboards with Intel chipsets (Intel Hub Architecture). As with any other southbridge, the ICH is used to connect and control peripheral devices.
- EHCI: The Enhanced Host Controller Interface (EHCI) specification describes the register-level interface for a Host Controller for the Universal Serial Bus (USB) Revision 2.0.
- DMI: Direct Media Interface (DMI) is point-to-point interconnection between an Intel northbridge and an Intel southbridge on a computer motherboard. It is the successor of the Hub Interface used in previous chipsets. It provides for a 10Gb/s bidirectional data rate.
- LPC: The Low Pin Count bus, or LPC bus, is used on IBM-compatible personal computers to connect low-bandwidth devices to the CPU, such as the boot ROM and the "legacy" I/O devices (behind a super I/O chip).
- SPI: The Serial Peripheral Interface Bus or SPI (pronounced "ess-pee-i" or "spy") bus is a synchronous serial data link standard named by Motorola that operates in full duplex mode. Devices communicate in master/slave mode where the master device initiates the data frame.
- Intel Matrix Storage Technology: It provides new levels of protection, performance, and expandability for desktop and mobile platforms. Whether using one or multiple hard drives, users can take advantage of enhanced performance and lower power consumption. When using more than one drive, the user can have additional protection against data loss in the event of a hard drive failure.
- Intel Turbo Memory with User Pinning: An on-motherboard flash card, Intel's Turbo Memory is designed to act as another layer in the memory hierarchy, caching data where possible and improving performance/battery life in notebooks. User pinning offers more options to the user to improve system applications launch time and responsiveness.
Labels:
computer architecture,
diagram,
Intel,
Laptop,
Processor
Heap Memory
From Stanford CS Ed Library: Pointers and Memory
"Heap" memory, also known as "dynamic" memory, is an alternative to local stack memory. Local memory is quite automatic — it is allocated automatically on function call and it is deallocated automatically when a function exits. Heap memory is different in every way. The programmer explicitly requests the allocation of a memory "block" of a particular size, and the block continues to be allocated until the programmer
explicitly requests that it be deallocated. Nothing happens automatically. So the programmer has much greater control of memory, but with greater responsibility since the memory must now be actively managed.
The advantages of heap memory are...
"Heap" memory, also known as "dynamic" memory, is an alternative to local stack memory. Local memory is quite automatic — it is allocated automatically on function call and it is deallocated automatically when a function exits. Heap memory is different in every way. The programmer explicitly requests the allocation of a memory "block" of a particular size, and the block continues to be allocated until the programmer
explicitly requests that it be deallocated. Nothing happens automatically. So the programmer has much greater control of memory, but with greater responsibility since the memory must now be actively managed.
The advantages of heap memory are...
- Lifetime. Because the programmer now controls exactly when memory is allocated and deallocated, it is possible to build a data structure in memory, and return that data structure to the caller. This was never possible with local memory which was automatically deallocated when the function exited.
- Size. The size of allocated memory can be controlled with more detail. For example, a string buffer can be allocated at run-time which is exactly the right size to hold a particular string. With local memory, the code is more likely to declare a buffer size 1000 and hope for the best.
The disadvantages of heap memory are...
- More Work. Heap allocation needs to be arranged explicitly in the code, which is just more work.
- More Bugs. Because it's now done explicitly in the code, realistically on occasion the allocation will be done incorrectly leading to memory bugs. Local memory is constrained, but at least it's never wrong.
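A minimal C sketch of the run-time-sized string buffer mentioned above (the function name CopyString and the sample string are my own illustration):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Return a heap-allocated copy of str that is exactly the right size.
// The caller owns the block and must free() it -- nothing happens automatically.
char* CopyString(const char* str)
{
    char* copy = (char*)malloc(strlen(str) + 1);   // +1 for the '\0' terminator
    if (copy != NULL)
        strcpy(copy, str);
    return copy;
}

int main(void)
{
    char* s = CopyString("heap memory example");
    if (s != NULL)
    {
        printf("%s\n", s);
        free(s);   // explicit deallocation by the programmer
    }
    return 0;
}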
Thursday, August 5, 2010
Classic swap function in C and C++
In C:
void Swap(int* a, int* b)
{
int temp;
temp = *a;
*a = *b;
*b = temp;
}
void SwapCaller()
{
int x = 1;
int y = 2;
// Use & to pass pointers to the int values of interest
Swap(&x, &y);
}
In C++:
// The & declares pass by reference
void Swap(int& a, int& b)
{
int temp;
// No *'s required -- the compiler takes care of it
temp = a;
a = b;
b = temp;
}
void SwapCaller()
{
int x = 1;
int y = 2;
// No &'s required -- the compiler takes care of it
Swap(x, y);
}
How Does Call Stack Work?
From Stanford CS Ed Library: Pointers and Memory
The exact details of the implementation are language and compiler specific. However, the basic structure below approximates the method used by many different systems and languages...
To call a function such as foo(6, x+1)...
1. Evaluate the actual parameter expressions, such as the x+1, in the caller's context.
2. Allocate memory for foo()'s locals by pushing a suitable "local block" of memory onto a runtime "call stack" dedicated to this purpose. For parameters but not local variables, store the values from step (1) into the appropriate slot in foo()'s local block.
3. Store the caller's current address of execution (its "return address") and switch execution to foo().
4. foo() executes with its local block conveniently available at the end of the call stack.
5. When foo() is finished, it exits by popping its locals off the stack and "returns" to the caller using the previously stored return address. Now the caller's locals are on the end of the stack and it can resume executing.
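To make the steps concrete, here is a small hypothetical C example that matches the foo(6, x+1) call described above, with the stack behavior noted in comments:
#include <stdio.h>

int foo(int a, int b)
{
    // Step 4: foo() runs with its local block (a, b, sum) at the end of the call stack.
    int sum = a + b;
    return sum;  // Step 5: foo()'s locals are popped and execution returns to the stored return address.
}

int main(void)
{
    int x = 5;
    // Step 1: the actual parameter expression x+1 is evaluated here, in the caller's context.
    // Steps 2-3: a local block for foo() is pushed and the caller's return address is stored.
    int result = foo(6, x + 1);
    printf("%d\n", result);  // the caller's locals are back at the end of the stack; prints 12
    return 0;
}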
Monday, July 12, 2010
Algorithms - Sorting
From Wikipedia Sorting Algorithm.
The following table describes sorting algorithms that are not comparison sorts. As such, they are not limited by an O(n log n) lower bound. Complexities below are in terms of n, the number of items to be sorted, k, the size of each key, and s, the chunk size used by the implementation. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n << 2^k, where << means "much less than."
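For instance, counting sort is a non-comparison sort that runs in O(n + k) time when the keys are small integers in a known range; a minimal sketch in C (the function name and return convention are my own):
#include <stdlib.h>

// Sort an array of integers known to lie in [0, max_key] in O(n + k) time,
// where k = max_key + 1. Returns 0 on success, -1 on failure.
int CountingSort(int a[], int n, int max_key)
{
    int i, j, pos = 0;
    int* count;
    if (!a || n < 0 || max_key < 0)
        return -1;
    count = (int*)calloc(max_key + 1, sizeof(int));
    if (!count)
        return -1;
    for (i = 0; i < n; i++)          // count how many times each key occurs
        count[a[i]]++;
    for (j = 0; j <= max_key; j++)   // write the keys back out in ascending order
        for (i = 0; i < count[j]; i++)
            a[pos++] = j;
    free(count);
    return 0;
}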
Bubble sort
Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, then it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm is highly inefficient, and is rarely used, except as a simplistic example. Bubble sort average case and worst case are both O(n²).
ResultCode BubbleSort(int a[], int nSize)
{
ResultCode res = SORT_FAILED;
int i, j = 0;
bool bSwap = true;
int tmp;
if (a)
{
while (bSwap)
{
bSwap = false;
j++;
for (i = 0; i < (nSize - j); i++)
{
if (a[i] > a[i + 1])
{
tmp = a[i];
a[i] = a[i + 1];
a[i + 1] = tmp;
bSwap = true;
}
}
}
res = SORT_OK;
}
return res;
}
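The ResultCode type and the SORT_OK / SORT_FAILED values used by these snippets are not declared in this post; a minimal assumed definition plus a small driver for BubbleSort might look like this (the enum shape and the driver are assumptions, not from the original):
#include <cstdio>

// Assumed declaration -- the post uses ResultCode, SORT_OK and SORT_FAILED
// without defining them, so something like this must appear before the sorts.
typedef enum { SORT_OK, SORT_FAILED } ResultCode;

ResultCode BubbleSort(int a[], int nSize);   // as implemented earlier in the post

int main()
{
    int a[] = { 5, 1, 4, 2, 8 };
    int n = (int)(sizeof(a) / sizeof(a[0]));
    if (BubbleSort(a, n) == SORT_OK)
    {
        for (int i = 0; i < n; i++)
            printf("%d ", a[i]);             // prints: 1 2 4 5 8
        printf("\n");
    }
    return 0;
}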
Selection sort
Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.
ResultCode SelectionSort(int numbers[], int array_size)
{
int i, j;
int min, temp;
ResultCode res = SORT_FAILED;
if (numbers)
{
// for each number, find the min from the list of i to the end
// and then swap the min with i;
for (i = 0; i < array_size-1; i++)
{
// set min to i
min = i;
// check each one from i + 1 to the end of the array
// to find the minimum number from the list of numbers
// from i+1 to the end
for (j = i+1; j < array_size; j++)
{
// if any number is less than the min, set min to the new index
if (numbers[j] < numbers[min])
{
min = j;
}
}
// swap the minimum with i
temp = numbers[i];
numbers[i] = numbers[min];
numbers[min] = temp;
}
res = SORT_OK;
}
return res;
}
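For example, on {29, 64, 73, 34, 20} the passes select 20, 29, 34 and 64 in turn: {20, 64, 73, 34, 29} → {20, 29, 73, 34, 64} → {20, 29, 34, 73, 64} → {20, 29, 34, 64, 73}.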
Insertion sort
Main article: Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists.
ResultCode InsertionSort(int arr[], int length)
{
int i, j, tmp;
ResultCode res = SORT_FAILED;
if (arr)
{
for (i = 1; i < length; i++)
{
j = i;
while (j > 0 && arr[j - 1] > arr[j])
{
tmp = arr[j];
arr[j] = arr[j - 1];
arr[j - 1] = tmp;
j--;
}
}
res = SORT_OK;
}
return res;
}
Shell sort
Main article: Shell sort
Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
ResultCode ShellSort(int num[], int length)
{
int i, temp, flag = 1;
int d = length;
ResultCode res = SORT_FAILED;
if (num)
{
while(flag || (d > 1))
{
flag = 0;
d = (d + 1) / 2; // use (d + 1)/2 as the gap
// starting from position 0 to length - d - 1, compare num[i]
// and num[i + d]
for (i = 0; i < (length - d); i++)
{
if (num[i] > num[i + d])
{
// swap positions i+d and i
temp = num[i + d];
num[i + d] = num[i];
num[i] = temp;
// record that a swap has occurred
flag = 1;
}
}
}
res = SORT_OK;
}
return res;
}
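For instance, with d = (d + 1) / 2 and an array of length 10 the gaps used are 5, 3, 2 and 1 (gap 1 then repeats until a pass makes no swaps). A small sketch to print the gap sequence for any length (illustrative only, not part of the original post):
#include <cstdio>

// Prints the shrinking gap sequence that ShellSort above steps through.
void PrintShellGaps(int length)
{
    int d = length;
    while (d > 1)
    {
        d = (d + 1) / 2;
        printf("%d ", d);   // e.g. length 10 -> 5 3 2 1
    }
    printf("\n");
}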
Binary tree sort
typedef struct tree_el {
int val;
struct tree_el * right, * left;
} node;
void BinaryTreeSort(node **tree, node *item)
{
if(!(*tree)) {
*tree = item;
return;
}
if(item->val <= (*tree)->val)
BinaryTreeSort(&(*tree)->left, item);
else if(item->val>(*tree)->val)
BinaryTreeSort(&(*tree)->right, item);
}
void printout(node * tree) {
if(tree->left) printout(tree->left);
printf("%d\n",tree->val);
if(tree->right) printout(tree->right);
}
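The snippet above only inserts an already-allocated node, so here is a hedged usage sketch that builds the tree from an array and prints the values in ascending order (the helper name and the malloc-based allocation are mine, not from the original):
#include <cstdlib>

// Assumes the node typedef, BinaryTreeSort() and printout() from above.
void SortWithBinaryTree(int a[], int n)
{
    node *root = NULL;
    for (int i = 0; i < n; i++)
    {
        node *item = (node *)malloc(sizeof(node));
        item->val = a[i];
        item->left = NULL;
        item->right = NULL;
        BinaryTreeSort(&root, item);   // insert the new node into the BST
    }
    printout(root);                    // in-order traversal prints sorted values
    // Freeing the tree is omitted for brevity.
}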
Merge sort
Main article: Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). Merge sort has seen a relatively recent surge in popularity for practical implementations, being used for the standard sort routine in the programming languages Perl[5], Python (as timsort[6]), and Java (also uses timsort as of JDK7[7]), among others. Merge sort has been used in Java at least since 2000 in JDK1.3.[8][9]
void Merge(int a[], int low, int high, int mid)
{
int i, j, k, c[high - low + 1];
i = low;
j = mid + 1;
k = 0;
while((i <= mid) && (j <= high))
{
if (a[i] < a[j])
{
c[k] = a[i];
k++;
i++;
}
else
{
c[k] = a[j];
k++;
j++;
}
}
while (i <= mid)
{
c[k] = a[i];
k++;
i++;
}
while(j <= high)
{
c[k] = a[j];
k++;
j++;
}
for(i = low; i < (high + 1); i++)
{
a[i] = c[i - low];
}
}
ResultCode MergeSort(int a[], int low, int high)
{
ResultCode res = SORT_FAILED;
int mid;
if (a)
{
if (low < high)
{
mid = (low + high) / 2;
MergeSort (a, low, mid);
MergeSort(a, mid + 1, high);
Merge(a, low, high, mid);
}
res = SORT_OK;
}
return res;
}
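Note that low and high are inclusive indices, so sorting a whole array means calling MergeSort(a, 0, n - 1); a short usage sketch, assuming the functions above are in scope:
#include <cstdio>

int main()
{
    int data[] = { 38, 27, 43, 3, 9, 82, 10 };
    int n = (int)(sizeof(data) / sizeof(data[0]));
    MergeSort(data, 0, n - 1);   // low and high are inclusive
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);  // prints: 3 9 10 27 38 43 82
    printf("\n");
    return 0;
}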
Heapsort
Main article: Heapsort
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst case complexity.
// node index starts at 0
#define left(i) 2*i+1
#define right(i) 2*i+2
// In computer science, a heap is a specialized tree-based data structure
// that satisfies the heap property: if B is a child node of A, then
// key(A) >= key(B). This implies that an element with the greatest key is
// always in the root node, and so such a heap is sometimes called a max-heap.
void max_heapify(int a[],int i,int size)
{
int l,r,largest,temp;
l=left(i);
r=right(i);
if(l<=size-1 && a[l]>a[i])
largest = l;
else largest = i;
if(r<=size-1 && a[r]>a[largest])
largest = r;
if(largest != i)
{
temp = a[i];
a[i]=a[largest];
a[largest]=temp;
// since we change the value of the "largest" (index) node,
// we have to make sure there is still a max heap below that.
max_heapify(a,largest,size);
}
}
void build_max_heap(int a[],int size)
{
int i;
// starting from rightmost parent going upward.
for(i=size/2 - 1 ;i>=0;i--)
max_heapify(a,i,size);
}
ResultCode HeapSort(int a[],int size)
{
int i,temp;
ResultCode res = SORT_FAILED;
if (a)
{
build_max_heap(a,size);
for(i=size-1;i>0;i--)
{
// swap the root (largest value) and the end
temp=a[i];
a[i]=a[0];
a[0]=temp;
size=size - 1;
max_heapify(a,0,size);
}
res = SORT_OK;
}
return res;
}
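With the index macros above, a[] = {16, 14, 10, 8, 7, 9, 3} is already a max-heap: node 0 (16) has children a[1] = 14 and a[2] = 10, node 1 has children a[3] = 8 and a[4] = 7, and node 2 has children a[5] = 9 and a[6] = 3, so every parent is at least as large as its children.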
Quicksort
Main article: Quicksort
Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower O(n²) performance, but if at each step we choose the median as the pivot then it works in O(n log n). Finding the median, however, is an O(n) operation on unsorted lists, and therefore exacts its own penalty.
ResultCode QuickSort(int arr[], int left, int right)
{
int i = left, j = right;
int tmp;
ResultCode res = SORT_FAILED;
if (arr)
{
int pivot = arr[(left + right) / 2];
/* partition */
while (i <= j)
{
while (arr[i] < pivot)
{
i++;
}
while (arr[j] > pivot)
{
j--;
}
if (i <= j)
{
tmp = arr[i];
arr[i] = arr[j];
arr[j] = tmp;
i++;
j--;
}
}
/* recursion */
if (left < j)
{
QuickSort(arr, left, j);
}
if (i < right)
{
QuickSort(arr, i, right);
}
res = SORT_OK;
}
return res;
}
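To address the pivot-selection point above, a common improvement is the median-of-three heuristic; here is a hedged sketch of how the pivot line in QuickSort could be replaced (the helper is mine, not part of the original code):
// Median of the first, middle and last elements; picking this as the pivot
// avoids the O(n²) behavior on already-sorted or reverse-sorted input.
static int MedianOfThree(int a, int b, int c)
{
    if (a < b)
    {
        if (b < c) return b;        // a < b < c
        return (a < c) ? c : a;     // b is the largest
    }
    if (a < c) return a;            // b <= a < c
    return (b < c) ? c : b;         // a is the largest
}

// In QuickSort above, the pivot line would then read:
//   int pivot = MedianOfThree(arr[left], arr[(left + right) / 2], arr[right]);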
Counting Sort
Main article: Counting sort
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for it to be efficient, but the algorithm is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
ResultCode CountingSort (int array[], int size)
{
int i, min, max;
ResultCode res = SORT_FAILED;
if (array)
{
min = max = array[0];
for(i = 1; i < size; i++)
{
if (array[i] < min)
min = array[i];
else if (array[i] > max)
max = array[i];
}
// under the assumption of range being smaller than size, or it's
// not worth it.
int range = max - min + 1;
int *count = (int *)malloc(range * sizeof(int));
for(i = 0; i < range; i++)
{
count[i] = 0;
}
// count index + min is the value;
// count[i] holds the frequency of that value.
for(i = 0; i < size; i++)
{
count[ array[i] - min ]++;
}
int j, z = 0;
for(i = min; i <= max; i++)
{
for(j = 0; j < count[ i - min ]; j++)
{
array[z++] = i;
}
}
free(count);
res = SORT_OK;
}
return res;
}
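For example, for {4, 2, 2, 8, 3} we get min = 2, max = 8 and range = 7; the counts for the values 2 through 8 are 2, 1, 1, 0, 0, 0, 1, and writing each value out that many times yields {2, 2, 3, 4, 8}.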
Bucket sort
Main article: Bucket sort
Bucket sort is a divide and conquer sorting algorithm that generalizes Counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. Thus this is most effective on data whose values are limited (e.g. a sort of a million integers ranging from 1 to 1000). A variation of this method called the single buffered count sort is faster than quicksort and takes about the same time to run on any set of data.
// Here a is the array to be sorted and n is the number of elements.
// Bucket sort assumes that the input is generated by a random process
// that distributes elements roughly uniformly over a known range; in this
// code the range is assumed to be [0, 999], and a[i] / 100 selects one of
// the 10 buckets.
ResultCode BucketSort(int *a,int n)
{
vector<int> bucket[10]; // needs #include <vector> and using namespace std;
int i, j;
int idx = 0;
ResultCode res = SORT_FAILED;
if (a)
{
for (i = 0; i < n; i++)
{
bucket[(a[i]/100)].push_back(a[i]);
}
for (i = 0; i < 10; i++)
{
// creating a new array.
int arr[bucket[i].size()];
// copy the vector data to array because our InsertionSort
// accepts array.
for (j = 0; j < bucket[i].size(); j++)
{
arr[j] = bucket[i][j];
}
InsertionSort(arr, bucket[i].size());
// put the sorted bucket to the original array.
for (j = 0; j < bucket[i].size(); j++)
{
a[idx++] = arr[j];
}
}
res = SORT_OK;
}
return res;
}
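With the assumed range of [0, 999] and 10 buckets, a[i] / 100 sends values 0 to 99 to bucket 0, 100 to 199 to bucket 1, and so on up to 900 to 999 in bucket 9, so concatenating the individually sorted buckets yields the fully sorted array.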
Radix sort
Main article: Radix sort
Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is minimal - only '1' or '0'.
ResultCode RadixSort(int *a,int n)
{
int i;
int b[n], m=0, exp=1;
ResultCode res = SORT_FAILED;
if (a)
{
// Find the maximum number
for (i = 0; i < n; i++)
{
if (a[i] > m)
{
m = a[i];
}
}
// starting from the last digit
while ((m/exp) > 0)
{
int bucket[10]={0};
// put each number in the bucket according to the value in the
// inspected digit
for(i = 0; i < n; i++)
{
bucket[(a[i]/exp) % 10]++;
}
// accumulate the values to the buckets
for(i = 1;i < 10; i++)
{
bucket[i] += bucket[i-1];
}
// use the accumulated count minus 1 (counts are 1-based positions)
// as the destination index in the output array b; iterating from the
// end keeps the sort stable.
for(i = n-1;i >= 0; i--)
{
b[--bucket[a[i]/exp%10]] = a[i];
}
for(i = 0;i < n;i++)
{
a[i] = b[i];
}
exp*=10;
}
res = SORT_OK;
}
return res;
}
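For example, sorting {170, 45, 75, 90, 802, 24, 2, 66}: after the 1s-digit pass the array is {170, 90, 802, 2, 24, 45, 75, 66}, after the 10s pass it is {802, 2, 24, 45, 66, 170, 75, 90}, and after the 100s pass it is {2, 24, 45, 66, 75, 90, 170, 802}.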
Comparison sorts:
Name | Average | Worst | Memory | Stable | Method | Notes
Bubble Sort | n² | n² | 1 | Yes | Exchanging | Tiny code
Selection Sort | n² | n² | 1 | Depends | Selection | Its stability depends on the implementation. Used to sort this table in Safari or other Webkit web browsers
Insertion Sort | n² | n² | 1 | Yes | Insertion | Average case is also O(n + d), where d is the number of inversions
Shell Sort | - | n log² n | 1 | No | Insertion |
Binary Tree Sort | n log n | n log n | n | Yes | Insertion | When using a self-balancing binary search tree
Merge Sort | n log n | n log n | n | Yes | Merging | Used to sort this table in Firefox
Heapsort | n log n | n log n | 1 | No | Selection |
Quicksort | n log n | n² | log n | Depends | Partitioning | Can be implemented as a stable sort depending on how the pivot is handled. Naïve variants use O(n) space
The following table describes sorting algorithms that are not comparison sorts. As such, they are not limited by an O(n log n) lower bound. Complexities below are in terms of n, the number of items to be sorted, k, the size of each key, and s, the chunk size used by the implementation. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2^k, where ≪ means "much less than."
Name | Average | Worst | Memory | Stable | n ≪ 2^k | Notes
Bucket Sort | n * k | n² * k | n * k | Yes | No | Assumes uniform distribution of elements from the domain in the array
Counting Sort | n + k | n + k | n + 2^k | Yes | Yes | If k = O(n) then RT = O(n)
LSD Radix Sort | n * k/s | n * k/s | n | Yes | No |
Labels: Algorithms, c, c++, computer science, programming, Sorting