PDC in CS2013 - extended
 
KA | KU | Tier | Level | Number | PDC | Learning Outcome
---------------------------------------------------------
AR | Assembly level machine organization | 2 | Familiarity | 2 | p | Describe how an instruction is executed in a classical von Neumann machine, with extensions for threads, multiprocessor synchronization, and SIMD execution
AR | Assembly level machine organization | 2 | Familiarity | 3 | p | Describe instruction level parallelism and hazards, and how they are managed in typical processor pipelines
AR | Digital logic and digital systems | 2 | Familiarity | 2 | p | Comprehend the trend of modern computer architectures towards multi-core and that parallelism is inherent in all hardware systems
AR | Digital logic and digital systems | 2 | Familiarity | 3 | p | Explain the implications of the "power wall" in terms of further processor performance improvements and the drive towards harnessing parallelism
AR | Functional organization | 3 | Familiarity | 3 | p | Explain basic instruction level parallelism using pipelining and the major hazards that may occur
AR | Interfacing and communication | 2 | Familiarity | 4 | d | Compare common network organizations, such as ethernet/bus, ring, switched vs. routed
AR | Multiprocessing and alternative architectures | 3 | Familiarity | 3 | d | Explain the concept of interconnection networks and characterize different approaches
AR | Multiprocessing and alternative architectures | 3 | Familiarity | 5 | d | Describe the differences between memory backplane, processor memory interconnect, and remote memory via networks
AR | Multiprocessing and alternative architectures | 3 | Familiarity | 1 | p | Discuss the concept of parallel processing beyond the classical von Neumann model
AR | Multiprocessing and alternative architectures | 3 | Familiarity | 2 | p | Describe alternative architectures such as SIMD and MIMD
AR | Multiprocessing and alternative architectures | 3 | Familiarity | 4 | p | Discuss the special concerns that multiprocessing systems present with respect to memory management and describe how these are addressed
AR | Performance enhancements | 3 | Familiarity | 5 | p | Discuss the performance advantages that multithreading offered in an architecture along with the factors that make it difficult to derive maximum benefits from this approach
CN | Processing | 3 | Familiarity | 9 | p | Describe the levels of parallelism including task, data, and event parallelism.
CN | Processing | 3 | Assessment | 10 | p | Compare and contrast parallel programming paradigms recognizing the strengths and weaknesses of each.
CN | Processing | 3 | Usage | 12 | p | Design, code, test and debug programs for a parallel computation.
GV | Basic Rendering | 3 | Familiarity | 2 | p | Describe the basic graphics pipeline and how forward and backward rendering factor in this.
HC | Collaboration and communication | 3 | Familiarity | 1 | d | Describe the differences between synchronous and asynchronous communication
IAS | Defensive Programming | 1 | Usage | 4 | p | Demonstrate using a high-level programming language how to prevent a race condition from occurring and how to handle an exception
IAS | Network Security | 2 | Familiarity | 1 | d | Describe the different categories of network threats and attacks
IAS | Network Security | 2 | Familiarity | 3 | d | Describe virtues and limitations of security technologies at each layer of the network stack
IAS | Security Policy and Governance | 3 | Familiarity | 7 | d | Understand the risks and benefits of outsourcing to the cloud
IAS | Web Security | 3 | Familiarity | 1 | d | Understand the browser security model including same-origin policy and threat models in web security
IAS | Web Security | 3 | Familiarity | 2 | d | Understand the concept of web sessions, secure communication channels such as TLS and importance of secure certificates, authentication including single sign-on such as OAuth and SAML
IAS | Web Security | 3 | Usage | 3 | d | Understand common types of vulnerabilities and attacks in web applications and defenses against them.
IAS | Web Security | 3 | Usage | 4 | d | Understand how to use client-side security capabilities
IAS | Web Security | 3 | Usage | 5 | d | Understand how to use server-side security tools.
IM | Distributed Databases | 3 | Familiarity | 1 | d | Explain the techniques used for data fragmentation, replication, and allocation during the distributed database design process
IM | Distributed Databases | 3 | Assessment | 2 | d | Evaluate simple strategies for executing a distributed query to select the strategy that minimizes the amount of data transfer
IM | Distributed Databases | 3 | Familiarity | 3 | d | Explain how the two-phase commit protocol is used to deal with committing a transaction that accesses databases stored on multiple nodes
IM | Distributed Databases | 3 | Familiarity | 4 | d | Describe distributed concurrency control based on the distinguished copy techniques and the voting method
IM | Distributed Databases | 3 | Familiarity | 5 | d | Describe the three levels of software in the client-server model
IM | Information Management Concepts | 2 | Familiarity | 12 | d | approaches that scale up to globally networked systems
IM | Information Storage and Retrieval | 3 | Usage | 4 | d | Perform Internet-based research
NC | Introduction | 1 | Familiarity | 1 | d | Articulate the organization of the Internet
NC | Introduction | 1 | Familiarity | 2 | d | List and define the appropriate network terminology
NC | Mobility | 2 | Familiarity | 2 | d | Describe how wireless networks support mobile users
NC | Networked Applications | 1 | Usage | 3 | d | Implement a simple client-server socket-based application
NC | Reliable Data Delivery | 2 | Familiarity | 1 | d | Describe the operation of reliable delivery protocols
NC | Reliable Data Delivery | 2 | Usage | 3 | d | Design and implement a simple reliable protocol
NC | Resource Allocation | 2 | Familiarity | 1 | d | Describe how resources can be allocated in a network
NC | Resource Allocation | 2 | Familiarity | 2 | d | Describe the congestion problem in a large network
OS | Concurrency | 2 | Usage | 2 | c | Demonstrate the potential run-time problems arising from the concurrent operation of many separate tasks
OS | Concurrency | 2 | Familiarity | 3 | c | Summarize the range of mechanisms that can be employed at the operating system level to realize concurrent systems and describe the benefits of each
OS | Concurrency | 2 | Familiarity | 5 | c | Summarize techniques for achieving synchronization in an operating system (e.g., describe how to implement a semaphore using OS primitives)
OS | Concurrency | 2 | Familiarity | 6 | c | Describe reasons for using interrupts, dispatching, and context switching to support concurrency in an operating system
OS | Operating System Principles | 2 | Familiarity | 1 | c | Describe the need for concurrency within the framework of an operating system
OS | Overview of Operating Systems | 1 | Familiarity | 4 | d | Discuss networked, client-server, distributed operating systems and how they differ from single user operating systems
PBD | Web Platforms | 3 | Usage | 1 | d | Design and implement a simple web application
PBD | Web Platforms | 3 | Familiarity | 4 | d | Describe the differences between Software-as-a-Service and traditional software products
PD | Cloud Computing | 3 | Familiarity | 1 | d | Discuss the importance of elasticity and resource management in cloud computing.
PD | Cloud Computing | 3 | Usage | 4 | d | Deploy an application that uses cloud infrastructure for computing and/or data resources
PD | Cloud Computing | 3 | Familiarity | 2 | pd | Explain strategies to synchronize a common view of shared data across a collection of devices
PD | Communication and Coordination | 1 | Usage | 1 | p | Use mutual exclusion to avoid a given race condition
PD | Communication and Coordination | 2 | Familiarity | 2 | c | Give an example of an ordering of accesses among concurrent activities that is not sequentially consistent
PD | Communication and Coordination | 2 | Usage | 5 | c | Write a program that correctly terminates when all of a set of concurrent tasks have completed
PD | Communication and Coordination | 2 | Usage | 6 | c | Use a properly synchronized queue to buffer data passed among activities
PD | Communication and Coordination | 2 | Familiarity | 7 | c | Explain why checks for preconditions, and actions based on these checks, must share the same unit of atomicity to be effective
PD | Communication and Coordination | 2 | Usage | 8 | c | Write a test program that can reveal a concurrent programming error; for example, missing an update when two activities both try to increment a variable
PD | Communication and Coordination | 2 | Familiarity | 9 | c | Describe at least one design technique for avoiding liveness failures in programs using multiple locks or semaphores
PD | Communication and Coordination | 2 | Familiarity | 10 | c | Describe the relative merits of optimistic versus conservative concurrency control under different rates of contention among updates
PD | Communication and Coordination | 2 | Usage | 3 | d | Give an example of a scenario in which blocking message sends can deadlock
PD | Communication and Coordination | 2 | Familiarity | 4 | d | Explain when and why multicast or event-based messaging can be preferable to alternatives
PD | Communication and Coordination | 3 | Usage | 12 | c | Use semaphores or condition variables to block threads until a necessary precondition holds
PD | Distributed Systems | 3 | Familiarity | 1 | d | Distinguish network faults from other kinds of failures
PD | Distributed Systems | 3 | Familiarity | 2 | d | Explain why synchronization constructs such as simple locks are not useful in the presence of distributed faults
PD | Distributed Systems | 3 | Usage | 3 | d | Give examples of problems for which consensus algorithms such as leader election are required
PD | Distributed Systems | 3 | Usage | 4 | d | Write a program that performs any required marshalling and conversion into message units, such as packets, to communicate interesting data between two hosts
PD | Distributed Systems | 3 | Usage | 5 | d | Measure the observed throughput and response latency across hosts in a given network
PD | Distributed Systems | 3 | Familiarity | 6 | d | Explain why no distributed system can be simultaneously consistent, available, and partition tolerant
PD | Distributed Systems | 3 | Usage | 7 | d | Implement a simple server -- for example, a spell checking service
PD | Distributed Systems | 3 | Familiarity | 8 | d | Explain the tradeoffs among overhead, scalability, and fault tolerance when choosing a stateful v. stateless design for a given service
PD | Distributed Systems | 3 | Familiarity | 9 | d | Describe the scalability challenges associated with a service growing to accommodate many clients, as well as those associated with a service only transiently having many clients
PD | Formal Models and Semantics | 3 | Usage | 1 | c | Model a concurrent process using a formal model, such as pi-calculus
PD | Formal Models and Semantics | 3 | Familiarity | 2 | c | Explain the characteristics of a particular formal parallel model
PD | Formal Models and Semantics | 3 | Usage | 3 | c | Formally model a shared memory system to show if it is consistent
PD | Formal Models and Semantics | 3 | Usage | 4 | c | Use a model to show progress guarantees in a parallel algorithm
PD | Formal Models and Semantics | 3 | Usage | 5 | c | Use formal techniques to show that a parallel algorithm is correct with respect to a safety or liveness property
PD | Formal Models and Semantics | 3 | Usage | 6 | c | Decide if a specific execution is linearizable or not
PD | Parallel Algorithms, Analysis, and Programming | 2 | Usage | 2 | p | Compute the work and span, and determine the critical path with respect to a parallel execution diagram
PD | Parallel Algorithms, Analysis, and Programming | 2 | Familiarity | 3 | p | Define “speed-up” and explain the notion of an algorithm’s scalability in this regard
PD | Parallel Algorithms, Analysis, and Programming | 2 | Usage | 4 | p | Identify independent tasks in a program that may be parallelized
PD | Parallel Algorithms, Analysis, and Programming | 2 | Familiarity | 5 | p | Characterize features of a workload that allow or prevent it from being naturally parallelized
PD | Parallel Algorithms, Analysis, and Programming | 2 | Usage | 6 | p | Implement a parallel divide-and-conquer and/or graph algorithm and empirically measure its performance relative to its sequential analog
PD | Parallel Algorithms, Analysis, and Programming | 3 | Familiarity | 8 | d | Provide an example of a problem that fits the producer-consumer paradigm
PD | Parallel Algorithms, Analysis, and Programming | 3 | Familiarity | 10 | d | Identify issues that arise in producer-consumer algorithms and mechanisms that may be used for addressing them
PD | Parallel Algorithms, Analysis, and Programming | 3 | Familiarity | 9 | pd | Give examples of problems where pipelining would be an effective means of parallelization
PD | Parallel Architecture | 1 | Familiarity | 1 | d | Explain the differences between shared and distributed memory
PD | Parallel Architecture | 2 | Familiarity | 2 | p | Describe the SMP architecture and note its key features
PD | Parallel Architecture | 2 | Familiarity | 3 | p | Characterize the kinds of tasks that are a natural match for SIMD machines
PD | Parallel Architecture | 3 | Familiarity | 6 | d | Describe the key features of different distributed system topologies
PD | Parallel Architecture | 3 | Familiarity | 5 | p | Describe the challenges in maintaining cache coherence
PD | Parallel Architecture | 3 | Familiarity | 4 | pd | Explain the features of each classification in Flynn’s taxonomy
PD | Parallel Decomposition | 1 | Usage | 1 | p | Explain why synchronization is necessary in a specific parallel program
PD | Parallel Decomposition | 2 | Usage | 2 | p | Write a correct and scalable parallel algorithm
PD | Parallel Decomposition | 2 | Usage | 3 | p | Parallelize an algorithm by applying task-based decomposition
PD | Parallel Decomposition | 2 | Usage | 4 | p | Parallelize an algorithm by applying data-parallel decomposition
PD | Parallel Performance | 3 | Usage | 1 | p | Calculate the implications of Amdahl’s law for a particular parallel algorithm