Tuesday, 22 June 2010

[HIForum] [Kolloquium] Talk on 28 June 2010 - Prof. Dr. Stephan Olbrich

Invitation to the

Informatisches Kolloquium Hamburg


Monday, 28 June 2010
at 5:15 p.m.
Vogt-Kölln-Straße 30
Konrad-Zuse-Hörsaal
Building B

Prof. Dr.-Ing. Stephan Olbrich
Universität Hamburg
Director of the Regional Computing Center


*Scalable In-Situ Data Extraction and Distributed Visualization*

Abstract:

In the last few years, data analysis and visualization have dramatically gained in importance, since this part of the complete process chain is much more difficult to scale than the numerical cores of simulation models. 3D presentation of scientific computing results - especially when taking advantage of highly interactive virtual reality environments - has become feasible with low-cost equipment such as 3D monitors or TV sets and advanced 3D graphics cards, a development driven by the consumer market. In computational fluid dynamics, 3D grids of up to 10^11 data points are typically simulated on 4,000 cores, which for a non-stationary scenario (~10^4 time steps) results in roughly 10 petabytes of raw result data. Since such an amount of data cannot be transferred, stored, or explored using traditional approaches of separate post-processing, one topic of worldwide research is the development of tools that integrate data extraction into the simulation software, so-called "in-situ data extraction", and take advantage of distributed systems for remote visualization.
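
For orientation, a quick back-of-the-envelope check in Python reproduces the quoted volume; the ~10 bytes per grid point and time step are an illustrative assumption (e.g. a few single-precision field variables per point), the other numbers are taken from the abstract:

grid_points = 10**11       # data points in the 3D grid
time_steps = 10**4         # non-stationary scenario
bytes_per_point = 10       # assumed raw storage per point and time step

total_bytes = grid_points * time_steps * bytes_per_point
print(f"raw result data: {total_bytes / 1e15:.0f} PB")   # -> 10 PB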

We have developed a visualization middleware that implements parallel in-situ data extraction as a programming library: it minimizes sequential bottlenecks by parallelizing the visualization mapping methods, and it reduces the data volume by storing polygons and lines instead of raw data. Supporting synchronous, on-demand 3D presentation and interaction scenarios under bandwidth and rendering-performance constraints, while still limiting the frame update time to achieve interactive rates, requires flexible and efficient reduction and post-filtering techniques. For this purpose, our data extraction library supports MPI-based computing environments and encapsulates a parallel implementation of vertex-cluster-based simplified isosurfaces as well as parallel extraction of property-enhanced pathlines. These pathlines can be interactively post-filtered by a specialized "3D streaming server", which combines storage, filtering, and play-out of sequences of 3D scenes as a 3D movie that can be navigated in real time.
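
The vertex-clustering simplification mentioned above can be illustrated with a small serial Python sketch (a toy example of mine, not the speaker's library; the function and parameter names are made up). The idea: snap the isosurface vertices into a coarse grid of cluster cells, merge all vertices that fall into the same cell into one representative, and drop triangles that collapse. In a parallel in-situ setting, each MPI rank would apply this to its local surface patch and stream the clustered geometry instead of the raw data.

import numpy as np

def cluster_simplify(vertices, triangles, cell_size):
    """Vertex-clustering simplification of a triangle mesh.

    vertices:  (N, 3) float array of vertex positions
    triangles: (M, 3) int array of vertex indices per triangle
    cell_size: edge length of the cubic cluster cells
    """
    # 1) Assign every vertex to a cell of the coarse cluster grid.
    cells = np.floor(vertices / cell_size).astype(np.int64)

    # 2) Merge: one representative vertex (here the mean) per occupied cell.
    _, cluster_id, counts = np.unique(cells, axis=0,
                                      return_inverse=True, return_counts=True)
    reps = np.zeros((counts.size, 3))
    np.add.at(reps, cluster_id, vertices)
    reps /= counts[:, None]

    # 3) Re-index triangles and drop those that collapsed
    #    (two or more corners ended up in the same cluster).
    tri = cluster_id[triangles]
    keep = ((tri[:, 0] != tri[:, 1]) &
            (tri[:, 1] != tri[:, 2]) &
            (tri[:, 0] != tri[:, 2]))
    return reps, tri[keep]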

Contact: Prof. Dr. Christopher Habel
habel@informatik.uni-hamburg.de, Tel. 42883-2417

---------------------------------------------------------------

Schedule: http://www.informatik.uni-hamburg.de/Info/Kolloquium/



Tuesday, 15 June 2010

[HIForum] [Kolloquium] Talk on 21 June 2010 - Prof. Dr. J. Hertzberg

Invitation to the

Informatisches Kolloquium Hamburg


Monday, 21 June 2010
at 5:15 p.m. (17:00 c.t.)
Vogt-Kölln-Straße 30
Konrad-Zuse-Hörsaal
Building B

Prof. Dr. Joachim Hertzberg
Universität Osnabrück
Institute of Computer Science

*On Affordances and Plan-Based Robot Control*

Affordances, as introduced by the psychologist J. J. Gibson, keep
surfacing in the robotics literature as an approach or a metaphor for
modeling the essential real-time coupling between the steady flow of
sensor data and action control. Affordances in Gibson's sense are
possibilities of action for an agent that the agent can perceive
directly in its environment. Their use in the robotics literature is
typically associated with reactive or behavior-based control and with
learning approaches at a relatively low level of sensor and actuator
data.

In our work, we assume that the action part of an affordance is
implemented on a robot as a closed-loop control routine -- this is
much in a Gibsonian spirit. Much against Gibson's ideas, we also
assume that affordances may be represented and reasoned about, and
perceived affordance tokens may be stored in a map. Moreover, an
aspect of affordance representations is that the afforded action may
be cast into an action description that is part of the action
repertoire of some classical planning domain description -- here,
Gibson would strongly object. However, in that way actions expected to
be afforded in some region of the environment may be put into symbolic
action plans and made a robust part of plan-based robot control. The
talk will explain the basic ideas, present demo examples and
experiments of the approach in use, and discuss its potential.
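
As a reading aid (my own sketch, not code from the talk; all names are
hypothetical), the three ingredients named above - a directly
perceivable trigger, a closed-loop control routine, and a planner-level
action description - can be pictured as one record per affordance, with
perceived affordance tokens stored in a map of the environment:

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Affordance:
    """One affordance, following the decomposition sketched in the abstract."""
    name: str
    perceive: Callable[[dict], bool]   # does the current sensor data afford the action?
    control: Callable[[dict], None]    # closed-loop routine that executes the action
    planner_op: str                    # action description for a classical planner (e.g. PDDL)

# Perceived affordance tokens, stored per map cell of the environment.
affordance_map: Dict[Tuple[int, int], List[str]] = {}

def record_token(cell: Tuple[int, int], aff: Affordance) -> None:
    """Remember that 'aff' was perceived as available in map cell 'cell'."""
    affordance_map.setdefault(cell, []).append(aff.name)

# Hypothetical example: a region perceived as traversable affords driving,
# and its planner-level counterpart can enter symbolic action plans.
traversable = Affordance(
    name="traversable",
    perceive=lambda sensors: sensors.get("free_space", 0.0) > 0.5,
    control=lambda sensors: None,   # placeholder for the real driving controller
    planner_op="(:action drive :parameters (?from ?to) ...)",
)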


Contact: Prof. Dr. Jianwei Zhang
zhang@informatik.uni-hamburg.de, Tel. 42883-2431


---------------------------------------------------------------

Schedule: http://www.informatik.uni-hamburg.de/Info/Kolloquium/



Wednesday, 2 June 2010

[HIForum] [Kolloquium] Talk on 7 June 2010 - JProf. Dr. Pia Knoeferle

Invitation to the

Informatisches Kolloquium Hamburg


Monday, 7 June 2010
at 5:15 p.m. (17:00 c.t.)
Vogt-Kölln-Straße 30
Konrad-Zuse-Hörsaal
Building B


JProf. Dr. Pia Knoeferle
CITEC (Cognitive Interaction Technology Center of Excellence)
Universität Bielefeld

*Visual context influences on online language comprehension: individual and time course differences*

Existing studies have shown that visual context information can rapidly influence sentence comprehension in real time. Yet many aspects of the time course of its influence on comprehension remain little studied. Results from a first event-related brain potential (ERP) experiment contribute towards re-establishing picture-sentence verification - possibly discredited for its over-reliance on post-sentence response time (RT) measures - as a task for studying situated comprehension. Employing this paradigm, I will then report ERP results suggesting that there is no one-size-fits-all pattern of visual context effects. Rather, the time course of visual context effects differs as a function of participants' working memory resources and of which aspects of a scene mismatch the sentence input.


Contact: Prof. Dr. Christopher Habel
habel@informatik.uni-hamburg.de, Tel. 42883-2417


---------------------------------------------------------------

Schedule: http://www.informatik.uni-hamburg.de/Info/Kolloquium/

