Search Results
Now showing 1 - 10 of 19
- QuickFaaS: providing portability and interoperability between FaaS Platforms (Publication). Rodrigues, Pedro; Freitas, Filipe; Simão, José. Serverless computing hides infrastructure management from developers and runs code on demand, automatically scaled and billed only for the code's execution time. One of the most popular serverless backend services is Function-as-a-Service (FaaS), in which developers are often confronted with cloud-specific requirements. Function signature requirements and the use of custom libraries unique to each cloud provider were identified as the two main causes of portability issues in FaaS applications, leading to various vendor lock-in problems. In this work, we define three cloud-agnostic models that compose FaaS platforms. Based on these models, we developed QuickFaaS, a multi-cloud interoperability desktop tool targeting cloud-agnostic functions and FaaS deployments. The proposed cloud-agnostic approach enables developers to reuse their serverless functions across cloud providers without changing code or installing extra software. We also provide an evaluation that validates the proposed solution by measuring the impact of the cloud-agnostic approach on function performance, compared to a cloud-specific one. The study shows that the cloud-agnostic approach does not significantly affect function performance.
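
A minimal sketch of the cloud-agnostic function idea from the QuickFaaS entry above, assuming hypothetical provider-neutral types (AgnosticRequest, AgnosticResponse, AgnosticFunction are illustrative names, not QuickFaaS's actual interfaces): the business logic avoids provider SDKs and provider-specific signatures, so the same function could be wired to different FaaS platforms through thin adapters.

```java
// Illustrative only: these provider-neutral types are assumptions, not QuickFaaS's real interfaces.
// The cloud-agnostic function depends only on generic types, so the same business logic could be
// deployed to different FaaS platforms through thin, provider-specific adapters.

import java.util.Map;

// Hypothetical provider-neutral request/response types.
record AgnosticRequest(Map<String, String> headers, String body) {}
record AgnosticResponse(int status, String body) {}

// The cloud-agnostic function: no provider SDK imports, no provider-specific signature.
interface AgnosticFunction {
    AgnosticResponse handle(AgnosticRequest request);
}

class GreetFunction implements AgnosticFunction {
    @Override
    public AgnosticResponse handle(AgnosticRequest request) {
        String name = request.headers().getOrDefault("X-Name", "world");
        return new AgnosticResponse(200, "Hello, " + name);
    }
}
```
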
- Locality-aware GC optimisations for big data workloads (Publication). Patrício, Duarte; Bruno, Rodrigo; Simão, José; Ferreira, Paulo; Veiga, Luís. Many Big Data analytics and IoT scenarios rely on fast and non-relational storage (NoSQL) to help process massive amounts of data. In addition, managed runtimes (e.g. the JVM) are now widely used to support the execution of these NoSQL storage solutions, particularly when dealing with Big Data key-value store-driven applications. The benefits of such runtimes can however be limited by automatic memory management, i.e., Garbage Collection (GC), which does not consider object locality, resulting in objects that point to each other being dispersed in memory. In the long run this may break the service level of applications due to extra page faults and degradation of locality in system-level memory caches. We propose LAG1 (short for Locality-Aware G1), an extension of modern heap layouts to promote locality between groups of related objects. This is done with no previous application profiling and in a way that is transparent to the programmer, without requiring changes to existing code. The heap layout and algorithmic extensions are implemented on top of the Garbage First (G1) garbage collector (now the default collector) of the HotSpot JVM. Using the YCSB benchmarking tool to benchmark HBase, a well-known and widely used Big Data application, we show negligible overhead in frequent operations such as the allocation of new objects, and significant improvements when accessing data, supported by higher hit rates in system-level memory structures.
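
A toy Java model of the co-location idea behind LAG1, under the assumption that related objects can be tagged with a group identifier; the real work happens inside the HotSpot G1 collector, so this only illustrates the goal of grouping related objects contiguously during a compaction step:

```java
// Toy model only: LAG1 itself extends the heap layout of the HotSpot G1 collector. This sketch
// merely illustrates the goal of placing groups of related objects contiguously, instead of
// leaving objects that point to each other dispersed across memory.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class ToyObject {
    final int groupId;     // objects reachable from the same key-value entry share a group id
    final String payload;
    ToyObject(int groupId, String payload) { this.groupId = groupId; this.payload = payload; }
}

class ToyHeap {
    private final List<ToyObject> slots = new ArrayList<>();

    // Allocation order may interleave objects from different groups.
    void allocate(ToyObject o) { slots.add(o); }

    // Locality-aware "evacuation": members of the same group end up next to each other.
    void compactByGroup() { slots.sort(Comparator.comparingInt(o -> o.groupId)); }

    List<ToyObject> layout() { return slots; }
}
```
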
- Towards a hardware-in-the-loop quantum optical ground station simulator and testbed (Publication). Niehus, Manfred; Simão, José; Silva, João Castanheira da; Serrador, António; Carvalho, João M.; Gonçalves Cavaco Mendes, Mário José. A single pass Bennett-Brassard 1984 protocol quantum key distribution downlink from a small quantum satellite in low earth orbit to an optical ground station is studied with the objective to develop the framework for a hardware-in-the-loop quantum optical ground station simulator and testbed.
- SmartGC: online memory management prediction for PaaS Cloud Models (Publication). Simão, José; Esteves, Sérgio; Veiga, Luís. In Platform-as-a-Service clouds (public and private), an efficient resource management of several managed runtimes involves limiting the heap size of some VMs so that extra memory can be assigned to higher priority workloads. However, this should not be done in an application-oblivious way, because performance degradation must be minimized. Also, each tenant tends to repeat the execution of applications with similar memory-usage patterns, giving the opportunity to reuse parameters known to work well for a given workload. This paper presents SmartGC, a system to determine, at runtime, the best values for critical heap management parameters of JVMs. SmartGC comprises two main phases: (1) a training phase where it collects, with different heap resizing policies, representative execution metrics during the lifespan of a workload; and (2) an execution phase where it matches the execution parameters of new workloads against those of already seen workloads, and enforces the best heap resizing policy. Distinctly from other works, this is done without a previous analysis of unknown workloads. Using representative applications, we show that our approach can lead to memory savings, even when compared with a state-of-the-art virtual machine (OpenJDK). Furthermore, we show that we can predict the best heap policy with high accuracy, in a relatively short period of time and with a negligible runtime overhead. Although we focus on heap resizing, this same approach could also be used to adapt other parameters or even the GC algorithm.
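
A minimal sketch, assuming a simple nearest-neighbour match over metric vectors, of SmartGC's execution-phase idea of matching a new workload's execution metrics against already seen workloads and reusing the best known heap resizing policy (names such as ProfiledWorkload and choosePolicy are illustrative, not SmartGC's API):

```java
// Minimal sketch, not SmartGC's actual matching algorithm: it only illustrates comparing the
// execution metrics of a new workload against previously profiled workloads and reusing the
// heap resizing policy of the closest match. All names are illustrative.

import java.util.List;

record ProfiledWorkload(String name, double[] metrics, String bestHeapPolicy) {}

class PolicyMatcher {
    // Euclidean distance between metric vectors (e.g., allocation rate, live set size, GC time).
    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    // Pick the heap resizing policy of the closest known workload (assumes a non-empty history).
    static String choosePolicy(double[] observedMetrics, List<ProfiledWorkload> known) {
        ProfiledWorkload best = known.get(0);
        for (ProfiledWorkload w : known) {
            if (distance(observedMetrics, w.metrics()) < distance(observedMetrics, best.metrics())) {
                best = w;
            }
        }
        return best.bestHeapPolicy();
    }
}
```
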
- NGSPipes: fostering reproducibility and scalability in biosciences (Publication). Dantas, Bruno; Fleitas, Camenelias; Almeida, Alexandre; Forja, João; Francisco, Alexandre; Simão, José; Vaz, Cátia. Biosciences have been revolutionised by NGS technologies in recent years, leading to new perspectives in medical, industrial and environmental applications. And although our motivation comes from biosciences, the following is true for many areas of science: published results are usually hard to reproduce, delaying the adoption of new methodologies and hindering innovation. Even if data and tools are freely available, pipelines for data analysis are in general barely described and their setup is far from trivial. NGSPipes addresses these issues by reducing the effort necessary to define, build and deploy pipelines, either on a local workstation or in the cloud. The NGSPipes framework is freely available at http://ngspipes.github.io/.
- FairCloud: truthful cloud scheduling with continuous and combinatorial auctions (Publication). Fonseca, Artur; Simão, José; Veiga, Luís. With Cloud Computing, access to computational resources has become increasingly easy, and applications can offer improved scalability and availability. The datacenters that support this model have a huge energy consumption and a limited pricing model. One way of improving energy efficiency is by reducing the idle time of resources, i.e., resources that are active but serve a limited useful business purpose. This can be done by improving the scheduling across datacenters. We present FairCloud, a scalable Cloud-Auction system that facilitates allocation by allowing the adaptation of VM requests (through conversion to other VM types and/or resource capping, i.e., degradation), depending on the user profile. Additionally, the system implements an internal reputation mechanism to detect providers with low Quality of Service (QoS). FairCloud was implemented using CloudSim and the CloudAuctions extensions, and was tested with the Google Cluster Data. We observed higher quality in the served requests while maintaining CPU utilization. Our reputation mechanism proved to be effective by lowering the orders assigned to providers with lower quality.
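
An illustrative sketch of two mechanisms described in the FairCloud entry above, request adaptation depending on the user profile and a provider reputation score; the profile names, thresholds and capping factor are assumptions, not FairCloud's actual auction logic:

```java
// Illustrative sketch, not FairCloud's auction mechanism: it shows the two ideas the abstract
// describes, adapting a VM request (conversion or capping) according to the user profile and
// tracking a simple provider reputation score. Profiles, thresholds and the 20% cap are assumed.

import java.util.HashMap;
import java.util.Map;

enum UserProfile { STRICT, FLEXIBLE }            // hypothetical user profiles

record VmRequest(String vmType, double cpuCap) {}

class ReputationBook {
    private final Map<String, Double> score = new HashMap<>();

    void report(String provider, boolean metQoS) {
        score.merge(provider, metQoS ? 0.1 : -0.2, Double::sum);   // reward or penalize QoS outcomes
    }

    boolean trusted(String provider) { return score.getOrDefault(provider, 0.0) >= 0.0; }
}

class RequestAdapter {
    // A flexible user accepts a degraded (capped) request when the original bid would fail.
    static VmRequest adapt(VmRequest original, UserProfile profile, boolean bidFailed) {
        if (bidFailed && profile == UserProfile.FLEXIBLE) {
            return new VmRequest(original.vmType(), original.cpuCap() * 0.8);
        }
        return original;
    }
}
```
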
- Monitorização do processo de condução e alertas baseados no contexto [Monitoring the driving process and context-based alerts] (Publication). Almeida, Pedro L.; Schäfer, Tiago; Lourenço, André Ribeiro; Simão, José. Motivation and system overview: fatigue is considered one of the main factors behind road accidents. In 2013, a North American agency estimated that drivers drowsy from fatigue caused more than 70 thousand accidents, resulting in more than 40 thousand injuries and about 800 deaths. Typically, detecting these conditions relies on image processing, for example of the retina. However, other approaches are possible, namely physiological signals, as done by CardioWheel. CardioWheel is an embedded system that can be integrated into automobiles and aims to automatically detect fatigue states and the driver's biometric identity. To that end, a biometric element of the driver, the cardiac signal, is measured at the steering wheel, and fatigue alerts are sent to the outside through GPRS communications.
- A checkpointing-enabled and resource-aware Java Virtual Machine for efficient and robust e-Science applications in grid environments (Publication). Simão, José; Garrochinho, Tiago; Veiga, Luís. Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bio-informatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of the cause being a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost, and when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while also being prone to another failure and possibly delaying its completion with no deadline guarantees. Our proposed solution to address these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, being able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes Research Virtual Machine) with such mechanisms.
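
A simplified, application-level analogue of the checkpointing idea above, using plain Java serialization; the paper's mechanism lives inside the extended Jikes RVM and is transparent to the programmer, so this sketch only conveys the save-and-resume behaviour:

```java
// Simplified application-level analogue only: the paper's checkpointing works inside an extended
// JVM (Jikes RVM) and is transparent to the programmer. This sketch just conveys the idea of
// periodically persisting progress so a long-running computation can resume after a failure.

import java.io.*;

record Progress(long nextIteration, double partialResult) implements Serializable {}

class CheckpointedLoop {
    static void save(Progress p, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(p);
        }
    }

    static Progress loadOrStart(File file) {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Progress) in.readObject();
        } catch (Exception e) {
            return new Progress(0, 0.0);              // no usable checkpoint: start from scratch
        }
    }

    public static void main(String[] args) throws Exception {
        File ckpt = new File("progress.ckpt");
        Progress p = loadOrStart(ckpt);
        for (long i = p.nextIteration(); i < 1_000_000; i++) {
            p = new Progress(i + 1, p.partialResult() + Math.sqrt(i));   // stand-in for real work
            if (i % 100_000 == 0) save(p, ckpt);                         // periodic checkpoint
        }
    }
}
```
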
- Runtime object lifetime profiler for latency sensitive big data applications (Publication). Bruno, Rodrigo; Patrício, Duarte; Simão, José; Veiga, Luís; Ferreira, Paulo. Latency sensitive services such as credit-card fraud detection and website targeted advertisement rely on Big Data platforms which run on top of memory-managed runtimes, such as the Java Virtual Machine (JVM). These platforms, however, suffer from unpredictable and unacceptably high pause times due to inadequate memory management decisions (e.g., allocating objects with very different lifetimes next to each other, resulting in severe memory fragmentation). This leads to frequent and long application pause times, breaking Service Level Agreements (SLAs). This problem has been previously identified, and results show that current memory management techniques are ill-suited for applications that hold massive amounts of long-lived objects in memory (which is the case for a wide spectrum of Big Data applications). Previous works reduce such application pauses by allocating objects off-heap, in special allocation regions/generations, or by using ultra-low latency Garbage Collectors (GC). However, all these solutions either require a combination of programmer effort and knowledge, source code access, or offline profiling (with clear negative impacts on programmer productivity), or impose a significant impact on application throughput and/or memory to reduce application pauses. We propose ROLP, a Runtime Object Lifetime Profiler that profiles application code at runtime and helps pretenuring GC algorithms allocate objects with similar lifetimes close to each other, so that the overall fragmentation, GC effort, and application pauses are reduced. ROLP is implemented for OpenJDK 8 and was evaluated with a recently proposed open-source pretenuring collector (NG2C). Results show long-tail latency reductions of up to 51% for Lucene, 85% for GraphChi, and 69% for Cassandra. This is achieved with negligible throughput (< 6%) and memory overhead, with no programmer effort and no source code access.
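
A conceptual sketch of runtime lifetime profiling in the spirit of ROLP, keeping per-allocation-site survival statistics and suggesting a target generation for pretenuring; the sampling hooks, thresholds and class names are assumptions, not ROLP's JVM-level implementation:

```java
// Conceptual sketch only, not ROLP's JVM-level implementation: it illustrates keeping
// per-allocation-site lifetime statistics at runtime and using them to suggest where a
// pretenuring collector should place new objects from that site.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LifetimeProfiler {
    // Survival counts accumulated per allocation site, plus how many objects were sampled.
    private static final class SiteStats {
        long sampledObjects;
        long survivedCollections;
    }

    private final Map<Integer, SiteStats> stats = new ConcurrentHashMap<>();

    // Called (conceptually) when a sampled object from a site is observed surviving collections.
    void recordSurvival(int allocationSiteId, int collectionsSurvived) {
        SiteStats s = stats.computeIfAbsent(allocationSiteId, id -> new SiteStats());
        synchronized (s) {
            s.sampledObjects++;
            s.survivedCollections += collectionsSurvived;
        }
    }

    // Sites whose sampled objects tend to be long-lived get a higher target generation.
    int suggestedGeneration(int allocationSiteId) {
        SiteStats s = stats.get(allocationSiteId);
        if (s == null || s.sampledObjects == 0) return 0;      // default: young allocation
        double avgSurvived = (double) s.survivedCollections / s.sampledObjects;
        return avgSurvived > 2.0 ? 1 : 0;                      // assumed threshold for pretenuring
    }
}
```
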
- Distributed and decentralized orchestration of containers on edge clouds (Publication). Pires, André; Simão, José; Veiga, Luís. Cloud Computing has been successful in providing substantial amounts of resources to deploy scalable and highly available applications. However, there is a growing need for lower-latency services and cheap bandwidth access to accommodate the expansion of IoT and other applications that reside at the internet's edge. The development of community networks and volunteer computing, together with today's low cost of compute and storage devices, is filling the internet's edge with a large amount of still underutilized resources. Due to this, new computing paradigms like Edge Computing and Fog Computing are emerging. This work presents Caravela, a Docker container orchestrator that utilizes volunteer edge resources from users to build an Edge Cloud where it is possible to deploy applications using standard Docker containers. Current cloud solutions are mostly tied to a centralized cluster environment deployment. Caravela employs a completely decentralized architecture, with resource discovery and scheduling algorithms that cope with (1) the large number of volunteer devices and their volatile environment, (2) the wide-area networks that connect the devices, and (3) the absence of a natural central administration.
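
A hedged sketch of decentralized resource discovery in the spirit of Caravela: volunteer nodes advertise resource offers and a container deployment request is matched against them without a central cluster manager; the in-memory directory is a plain stand-in for Caravela's decentralized overlay, and all names are illustrative.

```java
// Hedged sketch, not Caravela's actual discovery and scheduling algorithms: volunteer nodes
// advertise resource offers and a container deployment request is matched against them without
// a central cluster manager. The in-memory directory stands in for Caravela's decentralized overlay.

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

record ResourceOffer(String nodeId, int freeCpuMillis, int freeMemMB) {}
record ContainerRequest(String image, int cpuMillis, int memMB) {}

class VolunteerDirectory {
    private final List<ResourceOffer> offers = new ArrayList<>();

    void advertise(ResourceOffer offer) { offers.add(offer); }

    // Pick any advertised node whose free resources satisfy the request.
    Optional<ResourceOffer> findNode(ContainerRequest req) {
        return offers.stream()
                .filter(o -> o.freeCpuMillis() >= req.cpuMillis() && o.freeMemMB() >= req.memMB())
                .findFirst();
    }
}
```
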