
Top 100+ Parallel Computer Architecture Interview Questions And Answers - Jun 01, 2020


Question 1. What Is Shared Memory Architecture?

Answer :

A single address space is visible to all execution threads.

Question 2. What Is Numa Memory Architecture?

Answer :

NUMA stands for Non-Uniform Memory Access and is a special form of shared memory architecture in which access times to different memory locations by a processor may vary, as may access times to the same memory location by different processors.

Question 3. Name Some Network Architectures Prevalent In Machines Supporting The Message Passing Paradigm?

Answer :

Ethernet, InfiniBand, and tree networks.

Question 4. What Is Data-Parallel Computation?

Answer :

Data is partitioned across parallel execution threads, each of which performs some computation on its partition, typically independent of the other threads.
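
A minimal data-parallel sketch in C with OpenMP; the array size and the per-element work are illustrative assumptions only:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];

        /* The iteration space is partitioned across threads; each thread
         * works on its own chunk of the array independently of the others. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            a[i] = 2.0 * i;        /* per-element work, no cross-thread dependence */
        }

        printf("%f\n", a[N - 1]);  /* 1999998.000000 */
        return 0;
    }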

Question 5. What Is Task-Parallel Computation?

Answer :

The parallelism manifests across functions. A set of functions needs to be computed; these may or may not have ordering constraints among them.
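
A minimal task-parallel sketch in C with OpenMP sections; the two functions are hypothetical placeholders for independent pieces of work:

    #include <stdio.h>
    #include <omp.h>

    /* Hypothetical independent functions (illustrative only). */
    static int sum_to(int n)  { int s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
    static int prod_to(int n) { int p = 1; for (int i = 1; i <= n; i++) p *= i; return p; }

    int main(void) {
        int s = 0, p = 0;

        /* Different functions (tasks) run in parallel; there is no
         * ordering constraint between the two sections. */
        #pragma omp parallel sections
        {
            #pragma omp section
            s = sum_to(100);
            #pragma omp section
            p = prod_to(10);
        }

        printf("%d %d\n", s, p);   /* 5050 3628800 */
        return 0;
    }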

Question 6. What Is Task Latency?

Answer :

The time taken for a task to finish from the time a request for it is made.

Question 7. What Is Speedup?

Answer :

The ratio of some performance metric (like latency) obtained using a single processor to that obtained using a set of parallel processors. For example, in terms of time, the speedup on p processors is S(p) = T(1) / T(p).

Question 8. What Is Parallel Efficiency?

Answer :

The speedup per processor, i.e., efficiency E(p) = S(p) / p.

Question 9. What Is An Inherently Sequential Task?

Answer :

One whose maximum speedup (using any number of processors) is 1.

Question 10. What Is The Memory Consistency Model Supported By OpenMP?

Answer :

There is no “guaranteed” sharing/consistency of shared variables until a flush is called. Flush sets that overlap are sequentially consistent, and the writes of a variable become visible to every other thread at the point where the flush is serialized. This is slightly weaker than “weak consistency.”
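
A minimal sketch of how flush is typically used, in C with OpenMP; the producer/consumer structure, the variable names, and the busy-wait are illustrative assumptions:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int data = 0, flag = 0;

        #pragma omp parallel sections shared(data, flag)
        {
            #pragma omp section
            {                               /* producer */
                data = 42;
                #pragma omp flush(data)     /* make data visible first  */
                flag = 1;
                #pragma omp flush(flag)     /* then publish the flag    */
            }
            #pragma omp section
            {                               /* consumer */
                int ready = 0;
                while (!ready) {
                    #pragma omp flush(flag)
                    ready = flag;
                }
                #pragma omp flush(data)     /* data is guaranteed visible here */
                printf("data = %d\n", data);
            }
        }
        return 0;
    }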

Question 11. How Are Threads Allocated To Processors When There Are More Threads Than The Number Of Processors?

Answer :

As soon as a thread is completed on a core, a new thread is run on it. The order can be controlled using the “schedule” clause.
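
A minimal OpenMP C sketch of the schedule clause; the dynamic policy and the chunk size of 4 are just illustrative choices:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        /* With more iterations (units of work) than processors, dynamic
         * scheduling hands a new chunk of 4 iterations to a core as soon
         * as it finishes its previous chunk. */
        #pragma omp parallel for schedule(dynamic, 4)
        for (int i = 0; i < 64; i++) {
            printf("iteration %d on thread %d\n", i, omp_get_thread_num());
        }
        return 0;
    }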

Question 12. What Is The Maximum Time Speedup Possible According To Amdahl's Law?

Answer :

1/f, where f is the inherently sequential fraction of the time taken by the best sequential execution of the task.
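
A short C sketch of this bound, using the full Amdahl formula S(p) = 1 / (f + (1 - f)/p); the sequential fraction f = 0.1 and the processor counts are assumed values for illustration:

    #include <stdio.h>

    int main(void) {
        double f = 0.1;                          /* inherently sequential fraction */

        for (int p = 1; p <= 1024; p *= 4) {
            double speedup = 1.0 / (f + (1.0 - f) / p);   /* Amdahl's law          */
            printf("p = %4d  speedup = %6.2f\n", p, speedup);
        }
        printf("limit as p grows: %.2f\n", 1.0 / f);      /* 10.00, i.e. 1/f       */
        return 0;
    }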

Question 13. What Is SIMD?

Answer :

A class belonging to Flynn’s taxonomy of parallel architectures, it stands for Single Instruction, Multiple Data. In this architecture, different processing elements all execute the same instruction in a given clock cycle, with the respective data (e.g., in registers) being independent of each other.

Question 14. What Is A Hypercube Connection?

Answer :

A single node is a hypercube. An n-node hypercube is made of two n/2-node hypercubes, with their corresponding nodes connected to each other.

Question 15. What Is The Diameter Of An N-Node Hypercube?

Answer :

log n. The diameter is the minimum number of links required to connect the two furthest nodes.
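
A small C sketch illustrating why: the number of hops between two hypercube nodes is the Hamming distance of their binary labels, so the furthest pair differs in all log n bits (the 16-node size below is only an illustrative assumption):

    #include <stdio.h>

    /* Hamming distance between node labels = number of hops in a hypercube. */
    static int hops(unsigned a, unsigned b) {
        int d = 0;
        for (unsigned x = a ^ b; x; x >>= 1) d += x & 1;
        return d;
    }

    int main(void) {
        unsigned n = 16;                            /* 16-node hypercube, log2(16) = 4 */
        printf("diameter = %d\n", hops(0, n - 1));  /* nodes 0000 and 1111 -> 4        */
        return 0;
    }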

Question 16. How Does OpenMP Provide A Shared Memory Programming Environment?

Answer :

OpenMP uses pragmas to control the automatic creation of threads. Since the threads share the address space, they share memory. However, they are allowed a local view of the shared variables through “private” variables. The compiler allocates a copy of the variable for each thread and optionally initializes it with the original variable. Within the thread, references to the private variable are statically changed to the new variable.
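
A minimal OpenMP C sketch of per-thread copies; firstprivate is used here so the copies are also initialized from the original variable (the variable names are illustrative):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int x = 10;   /* original shared variable */

        /* Each thread gets its own copy of x, initialized from the original
         * because of firstprivate; references to x inside the region are
         * redirected to the per-thread copy. */
        #pragma omp parallel firstprivate(x)
        {
            x += omp_get_thread_num();
            printf("thread %d sees x = %d\n", omp_get_thread_num(), x);
        }

        printf("original x is still %d\n", x);  /* 10: the copies do not write back */
        return 0;
    }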

Question 17. What Is Common CRCW PRAM?

Answer :

A Parallel Random Access Machine (PRAM) model of computation in which the processors can write to a common memory address in the same step, as long as they are all writing the same value.

Question 18. What Is The Impact Of Limiting The PRAM Model To A Fixed Number Of Processors Or A Fixed Memory Size?

Answer :

PRAMs with higher capacities can be simulated (with linear slowdown).

Question 19. What Is The Impact Of Eliminating Shared Write From PRAM?

Answer :

It can be simulated by a CREW PRAM with a log n factor added to the time. However, the algorithms in this model can become a bit complicated, as they must ensure conflict-free writes.

Question 20. What Is The Significance Of Work Complexity Analysis?

Answer :

Time complexity does not account for the size of the system. Work complexity is more reflective of practical performance. The work-time scheduling principle describes the expected time for a p-processor PRAM as work/p.

Question 21. What Does The Bulk Synchronous Model Add To PRAM For Parallel Algorithm Analysis?

Answer :

PRAM assumes constant-time access to shared memory, which is unrealistic. BSP counts time in “message communication”, and in this model a step is not initiated until its input data has arrived.

Question 22. Is It True That All NC Problems Parallelize Well?

Answer :

In general, NC problems do parallelize well, in the sense of having a polylog solution in the PRAM model while having only a super-polylog solution in the RAM model. However, for problems that already have a polylog solution in the RAM model, there may not be an effective speedup.

Question 23. Is User Locking Required To Control The Order Of Access To Guarantee Sequential Consistency?

Answer :

Sequential consistency is independent of user locking, but it does require delaying memory operations at the system level. The precise ordering of operations need not be preordained by the program logic. There just must exist a global ordering that is consistent with the local view observed by each processor.

Question 24. What Is Pipelining?

Answer :

Pipelining is a process in which work is carried out stage by stage. The work flows through a sequence of stages, and each stage performs one operation, so if there are n stages then up to n operations can be in progress at once. Pipelining is adopted to increase the throughput of the processing network, because operations move through the stages in a fast, steady stream.

Question 25. What Is Cache?

Answer :

A cache is a component that transparently stores data so that future requests for that data can be served faster. The data stored in a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If the requested data is contained in the cache (a cache hit), the request can be served by simply reading the cache, which is comparatively fast.

Otherwise (a cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the more requests that can be served from the cache, the faster the overall system performance.

Caching is often regarded more as a performance-enhancement tool than as a way to store application data. If you spend a lot of server resources accessing the same data over and over, use caching instead. Caching data can bring large performance benefits, so whenever you find that you need to frequently access data that does not change often, cache it in a cache object and your application's performance will improve.

Question 26. What Is Write Back And Write Through Caches?

Answer :

A write-back cache is a caching technique in which changes to data in the cache are not copied to the backing store until absolutely necessary. A write-through cache performs all write operations in parallel: data is written to main memory and the L1 cache simultaneously.

Write-back caching yields somewhat better performance than write-through caching because it reduces the number of write operations to main memory. With this performance improvement comes a slight risk that data may be lost if the system crashes.

Question 27. What Are Different Pipelining Hazards And How Are They Eliminated?

Answer :

A pipeline is a technique in which a business object goes through several stages asynchronously, where one stage picks up the work and drops it for the next stage to pick up. The hazard is when a different thread of the same process picks up the business object, which results in a malfunction. This can be handled by status handling or scan delays. In a processor pipeline, the corresponding hazards are data, control, and structural hazards (see Question 30), which are eliminated by techniques such as stalling, operand forwarding, and branch prediction.

Question 28. What Are Different Stages Of A Pipe?

Answer :

There are two varieties of pipelines:

Instruction pipeline, in which the different stages of instruction fetch and execution are handled in a pipeline.
Arithmetic pipeline, in which the different stages of an arithmetic operation are handled along the stages of a pipeline.
Question 29. Explain More About Branch Prediction In Controlling Control Hazards?

Answer :

A branch prediction control device, in an information processing unit that performs pipelined processing, generates a branch prediction address used for verification of an instruction being speculatively executed. The branch prediction control device includes a first return address storage unit storing the predicted return address, a second return address storage unit storing a return address to be generated depending on the execution result of the call instruction, and a branch prediction address storage unit sending the stored predicted return address as a branch prediction address and storing the sent branch prediction address.

When the branch prediction address differs from the return address that is generated after executing a branch instruction or a return instruction, the contents stored in the second return address storage unit are copied to the first return address storage unit.

Question 30. Give Examples Of Data Hazards With Pseudo Code?

Answer :

A hazard is an error in the operation of the microcontroller, caused by the simultaneous execution of multiple stages in a pipelined processor. There are three types of hazards: data hazards, control hazards, and structural hazards.
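
As a minimal pseudo-code style illustration (written here as C, with purely illustrative variable names), a read-after-write (RAW) data hazard looks like this:

    /* Two consecutive statements with a read-after-write (RAW) dependence.
     * In a pipeline, the second statement needs the value of 'sum' before
     * the first has written it back, so the pipeline must stall or forward. */
    #include <stdio.h>

    int main(void) {
        int a = 3, b = 4, c = 5;
        int sum  = a + b;      /* instruction 1: writes sum            */
        int prod = sum * c;    /* instruction 2: reads sum (RAW hazard) */
        printf("%d\n", prod);  /* 35 */
        return 0;
    }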

Question 31. How Do You Calculate The Number Of Sets Given Its Way And Size In A Cache?

Answer :

A cache in the primary storage hierarchy contains cache lines that are grouped into sets. If each set contains k lines then we say that the cache is k-way associative.

A data request has an address specifying the location of the requested data. Each cache-line-sized chunk of data from the lower level can only be placed into one set. The set it can be placed into depends on its address. This mapping between addresses and sets must have an easy, fast implementation. The fastest implementation involves using only a portion of the address to select the set.

When this is done, a request address is broken up into three parts:

An offset part identifies a particular location within a cache line.
A set part identifies the set that contains the requested data.
A tag part must be stored in each cache line along with its data to distinguish different addresses that could be placed in the set.
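
To make the calculation in the question explicit: number of sets = cache size / (associativity x line size). A minimal C sketch, where the particular sizes are only example values:

    #include <stdio.h>

    int main(void) {
        /* Example values (assumptions for illustration only). */
        unsigned cache_size = 32 * 1024;  /* total cache size: 32 KiB  */
        unsigned line_size  = 64;         /* bytes per cache line      */
        unsigned ways       = 4;          /* 4-way set associative     */

        /* Number of sets = cache size / (associativity x line size). */
        unsigned sets = cache_size / (ways * line_size);

        printf("sets = %u\n", sets);      /* prints: sets = 128        */
        return 0;
    }
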
Question 32. Scoreboard Analysis?

Answer :

Scoreboarding is a centralized approach, used in the CDC 6600 computer, for dynamically scheduling a pipeline so that instructions can execute out of order when there are no conflicts and the hardware is available. In a scoreboard, the data dependencies of every instruction are logged.

Instructions are released only when the scoreboard determines that there are no conflicts with previously issued and incomplete instructions. If an instruction is stalled because it is unsafe to continue, the scoreboard monitors the flow of executing instructions until all dependencies have been resolved before the stalled instruction is issued.

Question 33. What Is Miss Penalty And Give Your Own Ideas To Eliminate It?

Answer :

The fraction or percentage of accesses that result in a hit is known as the hit rate. The fraction or percentage of accesses that result in a miss is known as the miss rate. Hit rate + miss rate = 1.0 (100%). The difference between the lower-level access time and the cache access time is called the miss penalty. It can be reduced by, for example, adding another level of cache or prefetching data before it is needed.

Question 34. How Do You Improve The Cache Performance?

Answer :

Reduce the miss rate,
Reduce the miss penalty, or
Reduce the time to hit in the cache.

CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time

Memory stall clock cycles = (Reads x Read miss rate x Read miss penalty) + (Writes x Write miss rate x Write miss penalty)

Memory stall clock cycles = Memory accesses x Miss rate x Miss penalty

CPU time = IC x (CPI_execution + (Memory accesses per instruction x Miss rate x Miss penalty)) x Clock cycle time, where hits are included in CPI_execution.

Misses per instruction = Memory accesses per instruction x Miss rate

CPU time = IC x (CPI_execution + Misses per instruction x Miss penalty) x Clock cycle time.
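
A small worked example of these formulas in C; the instruction count, CPI, accesses per instruction, miss rate, miss penalty, and cycle time are assumed illustrative numbers:

    #include <stdio.h>

    int main(void) {
        /* Assumed illustrative parameters. */
        double ic                 = 1e9;   /* instruction count           */
        double cpi_execution      = 1.0;   /* CPI with all cache hits     */
        double accesses_per_instr = 1.3;   /* memory accesses/instruction */
        double miss_rate          = 0.02;  /* 2% of accesses miss         */
        double miss_penalty       = 100.0; /* cycles per miss             */
        double cycle_time         = 1e-9;  /* 1 ns clock cycle            */

        /* CPU time = IC x (CPI_execution +
         *            accesses per instruction x miss rate x miss penalty)
         *            x clock cycle time                                   */
        double cpu_time = ic * (cpi_execution +
                                accesses_per_instr * miss_rate * miss_penalty)
                             * cycle_time;

        printf("CPU time = %.2f seconds\n", cpu_time);  /* 3.60 seconds */
        return 0;
    }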

Question 35. Different Addressing Modes?

Answer :

Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere.

Question 36. Computer Arithmetic With Two's Complement?

Answer :

The two's complement of a binary number is defined as the value obtained by subtracting the number from a large power of two (specifically, from 2^N for an N-bit two's complement). The two's complement of the number then behaves like the negative of the original number in most arithmetic, and it can coexist with positive numbers in a natural manner.

A two's-complement system or two's-complement arithmetic is a system in which negative numbers are represented by the two's complement of the absolute value; this system is the most common method of representing signed integers on computers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its two's complement. An N-bit two's-complement numeral system can represent every integer in the range −2^(N−1) to +2^(N−1)−1.
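
A small C sketch of this definition, using N = 8 bits as an illustrative assumption:

    #include <stdio.h>

    int main(void) {
        unsigned n = 8;                 /* number of bits (illustrative choice) */
        unsigned x = 5;                 /* value to negate                      */

        /* Two's complement = 2^N - x, kept within N bits. */
        unsigned twos = ((1u << n) - x) & ((1u << n) - 1);

        /* Equivalent bitwise form: invert the bits and add one. */
        unsigned alt  = (~x + 1u) & ((1u << n) - 1);

        printf("%u and %u\n", twos, alt);   /* both print 251 (0xFB), i.e. -5 */
        return 0;
    }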

Question 37. About Hardware And Software Interrupts?

Answer :

Hardware Interrupt:

Each CPU has external interrupt lines. External devices such as the keyboard, mouse, and other controllers can send signals to the CPU asynchronously over these lines.

Software Interrupt:

A software interrupt is an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls, because they are implemented as a subroutine call with a CPU ring-level change.

Question 38. What Is Bus Contention And How Do You Eliminate It?

Answer :

Bus contention takes place when more than one memory module attempts to access the bus simultaneously. It can be reduced by using a hierarchical bus architecture.



