CSC 464-551-01      Term II   2008-09     PARALLEL PROGRAMMING

3:30 – 4:45 pm    MW       Room: E 200

 

PREREQUISITES.   CSC 172

 

INSTRUCTOR.    Dr. M. S. Skaff                Office: E204       Telephone: 993-3376

                                                                      E-mail: skaffms@udmercy.edu

WEB SITE.  http://skaffms.faculty.udmercy.edu/index.html

 

OFFICE HOURS.  11:00 am-12:00 pm MWF, 2:00-3:00 pm MW, and anytime my office door is open or by appointment.

 

TEXTBOOK. The Art of Parallel Programming by Bruce Lester, 2nd edition, 1st World Publishing, 2006, ISBN: 1595408398

 

COURSE DESCRIPTION.  This course presents a comprehensive overview of the field of parallel computing by studying parallel programming. Programming is central to computing, and it brings together elements from several areas, including algorithms, languages, and computer architecture. To write a parallel computer program, one must first formulate an efficient parallel algorithm (as stated by the author). The algorithm must then be expressed in a parallel programming language. The effectiveness and efficiency of the program must take the parallel computer architecture into account. Finally, knowledge of good debugging techniques and performance evaluation is needed to create a finished product.

      The course will emphasize a continual interplay between parallel algorithms, languages, architecture, and performance evaluation. The student will develop a new level of programming skill. This will be accomplished by studying the major programming techniques for parallel programs on both shared-memory and distributed-memory parallel computers. As each new programming technique is introduced, the necessary parallel language support features are also introduced, and several parallel algorithms are used as examples to illustrate the technique.

      Students will take a hands-on approach: each chapter has programming projects in which the student must write and debug their own parallel programs. These programs will use the parallel programming language C*, which accompanies the textbook. The purpose of studying parallel programming is to expand a student's sequential view of computing to one in which hundreds or thousands of sequential computing activities all operate at the same time.

 

COURSE OBJECTIVES.  After taking this course, students will be able to understand:

1.      The C* programming language and how it differs from C++.

2.      The programming of shared-memory multiprocessors. This includes knowledge of the concepts of data parallelism, multiprocessor architecture, process communication, data sharing, and synchronous parallelism.

3.      Sources of parallel program degradation, including process creation overhead and sequential portions of code.

 


4.      An overview of shared-memory multiprocessor architecture emphasizing caching and memory system organization.

5.      The two major types of parallel process interaction: process communication and data sharing, including the use of spinlocks and atomic operations for process data sharing (a brief spinlock sketch follows this list).

6.      Synchronous iteration for parallel programs.

7.      Multicomputer topology and communications in distributed-memory parallel computers.

8.      The message-passing programming style and its application to creating efficient programs for multicomputers, including how to achieve good performance in multicomputer programs.

9.      The MPI standard.
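
As a preview of objective 5, the sketch below shows a minimal spinlock built from an atomic test-and-set operation. It is written in standard C11 rather than the course's C* language, and the type and function names (spinlock_t, spin_lock, spin_unlock) are illustrative assumptions, not taken from the textbook.

    #include <stdatomic.h>

    /* Illustrative spinlock sketch in C11 (not C*); names are hypothetical.        */
    /* A spinlock_t must be initialized with its flag cleared: { ATOMIC_FLAG_INIT } */
    typedef struct { atomic_flag locked; } spinlock_t;

    void spin_lock(spinlock_t *s)
    {
        /* Atomically test-and-set the flag; keep spinning until it was clear. */
        while (atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire))
            ;   /* busy-wait */
    }

    void spin_unlock(spinlock_t *s)
    {
        atomic_flag_clear_explicit(&s->locked, memory_order_release);
    }

A process brackets each access to shared data between spin_lock and spin_unlock, so only one process updates the shared data at a time.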

 

COURSE OUTCOMES.   After taking this course, a student should be able to:

1.     Write, debug, and implement parallel programs using the C* language.

2.     Determine the efficiency and speedup factors for parallel programs (the standard definitions are sketched briefly after this list).

3.     Understand why parallel programming is important.

4.     Know the concepts in shared-memory parallel programming including the topics of data parallelism, multiprocessor architecture, process communication, data sharing, and synchronous parallelism.

5.    Know the concepts in distributed-memory parallel programming including the topics of multicomputer architecture, message-passing programs, data partitioning, MPI standard library, and replicated workers.

6.     Understand the differences between C* and the C and C++ programming languages.
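
For outcome 2, the usual definitions are, as a brief sketch (the textbook's notation may differ): speedup S = T1 / Tp, where T1 is the sequential run time and Tp is the run time on p processors, and efficiency E = S / p. For example, a program that takes 12 seconds sequentially and 4 seconds on 4 processors has S = 12/4 = 3 and E = 3/4 = 0.75.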

 

EXAMINATIONS. There will not be any in-class exams. In place of exams, students will complete parallel programming projects and turn in homework based on the exercises at the end of each chapter. The required projects and homework assignments are listed below.

 

PROJECTS. 

CSC 464 students:  Two projects are required; one (1) project must be chosen from Category A and one (1) from Category B, defined below.

CSC 551 students:  Three projects are required; two (2) projects must be chosen from Category A and one (1) from Category B, defined below.

PROJECT SELECTION:                                                 

Category A:   Project 1 or 2 on pp. 88-90 (not both)

                       Project 1 or 2 on pp. 144-146 (not both)

                       Project 1 or 2 on pp. 175-176 (not both)

                       Project 1 or 2 on pp. 215-218 (not both)

 

Category A - CSC 464 Due Date: Feb 18, 2009

                     CSC 551 Due Dates: Feb 11, 2009 and March 18, 2009

 


Category B:    Project 1, 2, or 3 on pp. 301-303 (only one)

                        Project 1, 2, or 3 on pp. 354-357 (only one)

                        Project 1 or 2 on pp. 398-400 (not both)

 Category B Due Date: April 15, 2009

 

HOMEWORK ASSIGNMENTS: 

 

Assignment   Start Page   Problems               Due Date

     1            52       3, 5                   Jan 14, 2009

     2            91       4-8, 11, 13, 16, 20    Jan 28, 2009

     3           118       3, 7, 9, 11            Feb 11, 2009

     4           147       3, 6, 8, 10            Feb 25, 2009

     5           176       1, 2, 9                Mar 11, 2009

     6           218       3, 4, 10, 11, 12       Mar 25, 2009

     7           261       1, 3, 4, 6, 7, 13      Apr 8, 2009

 

GRADING:      Projects:                                       60%

                          Homework:                                  30%

                          Class participation and attendance:   10%

 

IMPORTANT FACTS. Last day to withdraw with no W – January 30, 2009

                                         Last day to withdraw with a W – March 27, 2009

                                         Spring Break – March 2-7, 2009

                                         Final Exam Week – April 20-25, 2009

                                         No Class – Jan 19, 2009

 

ACADEMIC INTEGRITY. Everything submitted for grading is expected to be the student's own work. Suspected violations will be handled according to College policy.

 


REPORT OUTLINE REQUIREMENTS.  Project reports must follow the outline below:

1.      TITLE PAGE.  Center the title of the report on the page, with your name and CSC 464-551 in the lower right-hand corner.

2.      SUMMARY FOR EXECUTIVE READER.  A brief (less than one page) summary of the content and purpose of the report.

3.      TABLE OF CONTENTS (optional).  Show page numbers for the major sections of the report (if any).

4.       PROJECT ANALYSIS.  This section contains the principal information of the report showing all analysis, graphs, and solutions.

5.       PROGRAM CODE (if any).

6.      OUTPUT.   This section documents and shows any computer output that exists.

7.      CONCLUSION.  A brief paragraph summarizing the report. Did it accomplish its goal? Why? Why not?

8.      REFERENCES.  List any references used in the report, including textbooks, library books, or internet sources.


PARALLEL LABORATORY INSTRUCTIONS

 

These instructions are to be followed when accessing the parallel programming software in the laboratory.

 

1.      Pick any computer to be the host computer. All other computers are called Nodes. Go to the E: drive (the local drive, not C:) on the host computer. Create a new folder called TEMP (use any name you like).

2.      Click on START. Click on MY COMPUTER. Click on the local drive (C:). Click on PROGRAM FILES. Click on MPICH.

3.      Click on SDK folder. Click on EXAMPLES folder. Click on NT folder.

4.      Copy the contents of the NT folder to the TEMP folder you created in step 1.

5.      Start Visual C++ version 6.0 by clicking on START, PROGRAM FILES, and Microsoft Visual Studio 6.0.

6.      In the Visual C++ window, CLOSE the Tip of the Day window (if it appears). Click FILE. Click OPEN WORKSPACE. Browse to the TEMP folder on the E: drive and double-click EXAMPLES.DSW.

7.      Right-click on the specific PROJECT application you wish to run (e.g., cpi, mpptest, …). Then left-click SET AS ACTIVE PROJECT.

8.      Click on PROJECT.  Click SETTINGS.

9.      Click on C/C++.

Under SETTINGS FOR: select WIN 32 DEBUG.

Under CATEGORY: select CODE GENERATION

Under RUN TIME LIBRARY:  select DEBUG MULTITHREADED

10.  Click on C/C++

Under SETTINGS FOR: select WIN 32 RELEASE

Under CATEGORY: select CODE GENERATION

Under RUN TIME LIBRARY: select MULTITHREADED

11.  Click on C/C++

Under SETTINGS FOR: select "All Configurations"

Under CATEGORY: select “Preprocessor”

Under ADDITIONAL INCLUDE DIRECTORIES: delete or clear field and enter E:\PROGRAM FILES\MPICH\SDK\include

12.  Click on LINK.

Under SETTINGS FOR: select "All Configurations"

Under CATEGORY: select “Input”

Under ADDITIONAL LIBRARY PATH:  enter the same path as in step 11 except replacing “include” with “lib”

13.  Click on LINK.

Under SETTINGS FOR: select “Win 32 Debug”

Under CATEGORY: select “General”

Under OBJECT/LIBRARY MODULES: Insert (or precede) with "ws2_32.lib mpichd.lib" (be sure to have spaces between each entry).


14.  Click on LINK.

Under SETTINGS FOR: select "Win 32 Release"

Under CATEGORY: select “General”

Under OBJECT/LIBRARY MODULES: Insert (or precede) with "ws2_32.lib mpich.lib" (be sure to have spaces between each entry).

15.  Click on OK

16.  Select the project to be processed. Right-click on the project name and select "Set as active project".  Click on BUILD, then the executable name (e.g., Build cpi.exe).

17.  If there are no errors from the BUILD operation, the executable (e.g., cpi.exe) resides in E:\TEMP\BASIC\PDEBUG.  Copy this executable to the TEMP folder on the E: drive of the HOST computer.

18.  Go to the C:\Program Files\mpich folder and click the MPD folder. Click the BIN folder. Copy the MPIRun.exe program from the BIN folder to the TEMP folder on the E: drive of the HOST computer.

19.  Copy the TEMP folder on the E: drive of the HOST computer to a floppy. Then copy the contents of the floppy to the E:\TEMP folder on each Node computer.

20.  Next, configure the computers that will run the application program. On the HOST computer, click on START, click on PROGRAM FILES, and click on MPICH. Then click on the MPD folder. Click on the MPICH Configuration Tool. The HOST computer ID is already on the screen. Select the IDs of the computers to be used as Nodes (highlight all those to be selected and click Add).

21.  To run the application, go to the TEMP folder on the HOST computer. In DOS mode at the E:\TEMP> prompt, for all applications except Mandel, type  mpirun -np k -logon  followed by the executable name, where k = number of nodes + 1. For example, if one host and 2 nodes are used, then k = 3, and we type  mpirun -np 3 -logon cpi.  For the MANDEL application, use  mpirun -np k -localonly mandel.
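
For reference, the project executables built in step 16 come from ordinary MPI programs written in C. The sketch below is a minimal illustration of such a program (an assumption for illustration, not the MPICH cpi source); it is built the same way as the SDK examples and launched with mpirun as in step 21.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI sketch (illustrative only, not the MPICH cpi example):
       every process reports its rank, and rank 0 also reports the count. */
    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime           */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process (0..size-1)  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes       */
        printf("Hello from process %d of %d\n", rank, size);
        if (rank == 0)
            printf("%d processes were launched by mpirun\n", size);
        MPI_Finalize();                         /* shut down the MPI runtime       */
        return 0;
    }

If this program were built as hello.exe (a hypothetical name), it would be started with  mpirun -np 3 -logon hello  on one host and two nodes, and each process would print one line.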

 

NOTE: The user name and password will change. Currently, the user name is dpl_ and the password is dpl.