Dependable Distributed Systems

The goal is to advance the state of the art in dependable distributed data management and dependable distributed systems, in particular (i) developing replication protocols based on group communication for different circumstances, with performance and availability as main concerns; (ii) supporting optimistic replication with novel logical clock mechanisms; (iii) scaling data management in a cloud computing environment; and (iv) developing agreement protocols and highly scalable information dissemination/aggregation protocols.
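The logical-clock mechanisms mentioned in (ii) can be illustrated with a version vector, a classic device for detecting concurrent updates in optimistic replication. This is a minimal, hypothetical sketch for illustration only, not any specific HASLab protocol:

```python
# A version vector maps replica ids to update counters; comparing two
# vectors tells whether one replica has seen all of the other's updates.

def increment(vv, replica):
    """Return a new version vector with `replica`'s counter bumped."""
    vv = dict(vv)
    vv[replica] = vv.get(replica, 0) + 1
    return vv

def dominates(a, b):
    """True if vector a has recorded every event recorded in b."""
    return all(a.get(r, 0) >= n for r, n in b.items())

def concurrent(a, b):
    """Neither replica has seen the other's latest update: a conflict."""
    return not dominates(a, b) and not dominates(b, a)

# Two replicas update independently from the same ancestor:
base = {}
at_x = increment(base, "x")   # {"x": 1}
at_y = increment(base, "y")   # {"y": 1}
assert concurrent(at_x, at_y)  # conflicting updates, must reconcile

# A pointwise maximum merges the two histories:
merged = {r: max(at_x.get(r, 0), at_y.get(r, 0)) for r in set(at_x) | set(at_y)}
assert dominates(merged, at_x) and dominates(merged, at_y)
```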

  • Rui Carlos Oliveira
  • José Orlando Pereira
  • António Luís Sousa
  • Paulo Sérgio Almeida
  • Carlos Baquero
  • Victor Francisco Fonte

Information Security

HASLab researchers develop techniques for constructing formal arguments of security for cryptographic schemes and protocols, in particular (i) formalizing security models (goals and attack scenarios) suitable for practical applications; (ii) constructing security arguments to support new and existing protocols that are used in the real world without theoretical validation; and (iii) developing formal verification tools that can mechanically check theoretical security proofs at the highest levels of assurance. They also work to enable the use of cryptography in new application scenarios and computing paradigms, both in terms of the necessary functionality and of efficient implementation, by developing domain-specific software development tools for cryptography.

  • José Bacelar Almeida
  • Manuel Bernardo Barbosa
  • José Esgalhado Valença

Fault Detection

Statistics-based approaches take as their only input dynamic information collected at run-time (such as component involvement in the execution of a test case), using no prior knowledge of the system under analysis; they therefore have intrinsically no modeling costs attached. Amongst the best statistical approaches in terms of diagnostic cost/performance ratio are those based on computing a statistical similarity between component activity and failure behavior using a so-called similarity coefficient s, a family we refer to as spectrum-based fault localization (SFL). Component activity is recorded as program spectra, collected in a matrix A, and whether each program spectrum in A corresponds to a passed or failed execution is recorded in a vector e. The diagnostic process can then be summarized as (A, e) --s--> D, where D is the resulting diagnostic report. SFL is a lightweight approach, since only a similarity coefficient (a scalar operation) has to be computed per component, after which the M components in the system are sorted into D in decreasing order of likelihood to be at fault. Moreover, dynamically collecting (A, e) incurs only marginal overhead.
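The (A, e) --s--> D process can be sketched concretely. Below, s is instantiated with the Ochiai coefficient, one common choice in the SFL literature; the toy matrix and error vector are hypothetical:

```python
import math

def sfl_ochiai(A, e):
    """Rank components by Ochiai similarity between their activity
    (the columns of spectra matrix A) and the error vector e.
    A[i][j] == 1 iff component j was involved in test i;
    e[i] == 1 iff test i failed.  Returns the diagnostic report D:
    component indices sorted by likelihood of being at fault."""
    n_tests, n_comp = len(A), len(A[0])
    total_fail = sum(e)
    scores = []
    for j in range(n_comp):
        n11 = sum(1 for i in range(n_tests) if A[i][j] and e[i])      # involved, failed
        n10 = sum(1 for i in range(n_tests) if A[i][j] and not e[i])  # involved, passed
        denom = math.sqrt(total_fail * (n11 + n10))
        scores.append(n11 / denom if denom else 0.0)
    return sorted(range(n_comp), key=lambda j: scores[j], reverse=True)

# Three components, four tests; component 1 is active in every failing run:
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 0],
     [0, 1, 0]]
e = [1, 1, 0, 1]
print(sfl_ochiai(A, e))  # → [1, 2, 0]: component 1 is the prime suspect
```

Note that each component's score depends only on a handful of counters over its own column, which is exactly why the per-component cost is a scalar operation.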

  • Rui Maranhão Abreu

Interactive Systems Modelling and Analysis

blah blah

  • José Creissac Campos
  • António Nestor Ribeiro

Program Verification

blah blah

  • José Bernardo Barros
  • Maria João Frade
  • Jorge Sousa Pinto

PJA things

blah blah

  • Paulo Jorge Azevedo

JAS things

blah blah

  • João Alexandre Saraiva

JNO things

blah blah

  • José Nuno Oliveira

LSB things

blah blah

  • Luís Soares Barbosa

NFR things

blah blah

  • Nuno Rodrigues

-- JorgeSousaPinto - 11 May 2011