Colloquium Archive

Past, Present, And Future Parallel Programming Paradigms

Rebecca Hartman-Baker, NERSC Division, Lawrence Berkeley Laboratory

03/14/2016

Parallel programming paradigms have been adapted to ever-changing computing resources, needs and goals over the course of history. The evolution of high-end computer architecture (from organic to mechanical to digital and from single-core computers to vector-based, multicore-based, and hybrid CPU/GPGPU-based machines) has necessitated the continuing development of new parallel programming paradigms and numerical algorithms. In this talk, I discuss the development of numerical algorithms throughout the history of numerical computing placed within their historical and architectural contexts, and the implications of future architectures on numerical methods.

Technical Considerations For VR

Jason Shankel, CTO, Wildstop

04/07/2016

Virtual reality headsets are poised to become the next breakthrough in human/computer interface technology. I will outline the unique challenges involved in supporting VR rendering and headset input.

Helping Software Exploit Hardware

Kelly A. Shaw, University of Richmond, Virginia

04/14/2016

In order to gain improved performance and reduced power consumption, computer architects create specialized hardware geared for specific computations. Graphics Processing Units (GPUs) are specialized processors designed for applications with large amounts of regular and parallel computation. While these specialized processors offer performance and power benefits, these advantages come at a cost. These devices are frequently less well understood and more difficult to program than general-purpose processors. In this talk, I present two approaches we created to help GPU software developers. Starchart provides developers with a tool that enables them to systematically and quickly understand how to tune important characteristics of their applications. The second approach, called MRPB, automatically prioritizes and reorders memory accesses to increase the benefits obtained from data caching in GPUs.

How To Find A New Largest Known Prime

Landon Curt Noll

04/21/2016

The quest to discover a new largest known prime has been ongoing for centuries. Those seeking to break the record for the largest known prime have pushed the bounds of computing. We have come a long way since 1978, when Landon's record-breaking 6533-digit prime was discovered (www.isthe.com/chongo/tech/math/prime/m21701.html). Today’s largest known prime (www.isthe.com/chongo/tech/math/prime/mersenne.html#largest) is almost 13 million digits long! To encourage the discovery of ever-larger primes, awards of $150,000 and $250,000 are offered (https://www.eff.org/awards/coop) for the first published proof of the discovery of a prime of at least 100 million and 1 billion digits, respectively. The search for the largest known prime requires writing and running code that must run to completion, without any errors. Because it takes a very long time to run to completion (several thousand hours in many cases), the code MUST RUN CORRECTLY the very first time! A significant QA effort is required to write 100% error-free code. Moreover, considerable effort must be put into fault-tolerant coding and recovery from the eventual operating system and hardware errors that will arise. The record goes neither to the fastest coder nor to the person with the fastest hardware, but rather to the first result that is proven to be correct. How are these large primes discovered? What are some of the best ways to find a new world-record-sized prime number? These and other prime questions will be explored. We will examine software- and hardware-based approaches and will look at code fragments and hardware machine state diagrams. NOTE: Knowledge of advanced mathematics is NOT required for this talk.
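The record primes referenced above are Mersenne numbers, i.e. numbers of the form 2^p − 1 (the 1978 record, M21701, is one). The standard way to test such a number for primality is the Lucas–Lehmer test; a minimal Python sketch follows. This is illustrative only, not code from the talk: real record searches replace the naive squaring below with FFT-based multiplication and add the checkpointing and error detection the abstract describes.

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if the Mersenne number 2**p - 1 is prime.

    Assumes p is an odd prime; the test does not apply otherwise.
    """
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4                     # Lucas-Lehmer starting value
    for _ in range(p - 2):
        s = (s * s - 2) % m   # one squaring step, reduced mod 2^p - 1
    return s == 0             # 2^p - 1 is prime iff the residue is 0

# Small examples: 2^13 - 1 = 8191 is prime, 2^11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13))  # True
print(lucas_lehmer(11))  # False
```

For p = 21701 the loop runs about 21,699 multi-thousand-digit squarings, which is why production searches need fast arithmetic and must tolerate hardware faults over runs of thousands of hours.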

Efficient And Effective Large-Scale Textual Search

Anagha Kulkarni, Computer Science Department, San Francisco State University

04/28/2016

The traditional search solutions for large datasets assume access to practically unlimited computational resources, and thus cannot be employed by small-scale organizations. Our work introduces Selective Search, a new retrieval approach that processes large volumes of data efficiently and effectively in computationally constrained environments. To achieve this, Selective Search partitions the dataset into subsets (shards) in such a way that at query execution time only a few selected shards need to be searched for a query. The dataset is divided into shards based on the similarity of the documents, thus creating topically homogeneous partitions (e.g. politics, sports, technology, and finance). This topic-based organization of the dataset concentrates the relevant documents for a query into a few shards. During query evaluation, the few shards that are likely to contain the relevant documents for the query are identified and searched. Empirical evaluation using some of the largest available datasets (e.g. half a billion web pages) demonstrates that Selective Search reduces search costs dramatically without degrading search effectiveness, and operationalizes this using very few computational resources.
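The partition-then-select idea can be sketched in toy form. This is an illustrative assumption, not the authors' implementation: documents are assigned to topical shards by bag-of-words overlap with hypothetical topic seeds, and at query time only the best-matching shard is searched. A real Selective Search system clusters at web scale and uses learned shard-ranking rather than this overlap score.

```python
from collections import Counter

def overlap(a: Counter, b: Counter) -> int:
    # Toy similarity: size of the multiset intersection of two word bags.
    return sum((a & b).values())

def build_shards(docs, seeds):
    """Assign each document to the topic seed it overlaps most (toy clustering)."""
    shards = {topic: [] for topic in seeds}
    for doc in docs:
        bag = Counter(doc.split())
        topic = max(seeds, key=lambda t: overlap(bag, seeds[t]))
        shards[topic].append(doc)
    return shards

def search(query, shards, seeds, k=1):
    """Rank shards against the query and search only the top k of them."""
    qbag = Counter(query.split())
    ranked = sorted(seeds, key=lambda t: overlap(qbag, seeds[t]), reverse=True)
    hits = []
    for topic in ranked[:k]:  # only k shards are ever scanned
        hits += [d for d in shards[topic] if overlap(qbag, Counter(d.split())) > 0]
    return hits

# Hypothetical topic seeds and documents, for illustration only.
seeds = {"sports": Counter("game team score".split()),
         "finance": Counter("stock market price".split())}
docs = ["the team won the game", "the stock market price fell"]
shards = build_shards(docs, seeds)
print(search("game score", shards, seeds, k=1))  # only the sports shard is searched
```

The point of the sketch is the cost model: with k shards searched out of many, query-time work scales with the selected shards rather than the whole collection, which is what lets the approach run in computationally constrained environments.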
