Browsing by Author "Veiga, Luís"
- Cloud-supported certification for energy-efficient web browsing and services. Publication. Avelar, Gonçalo; Simão, José; Veiga, Luís. Web applications are increasingly pushing more computation to the end user. With the proliferation of the software-as-a-service model, major cloud providers assume browsers as the user agent to access their solutions, taking advantage of recent and powerful client-side web programming technologies. These technologies enhance and revamp web pages’ aesthetics and interaction mechanics. Unfortunately, they also increase energy impact, in proportion to the rate at which more sophisticated browser mechanisms and web content appear. This work presents GreenBrowsing, which is composed of (i) a Google Chrome extension that manages browser resource usage and, indirectly, energy impact by employing resource-limiting mechanisms on browser tabs; and (ii) a certification subsystem that ranks URLs and web domains based on web-page-induced energy consumption. We show that GreenBrowsing’s mechanisms achieve substantial reductions in energy-inducing resource metrics, of up to 80% for CPU and memory usage. They can also, indirectly and partially, reduce bandwidth usage when a specific subset of the presented mechanisms is employed. All this with limited degradation of user experience compared to browsing the web without the extension.
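The certification idea above can be illustrated with a toy sketch: rank web domains by an energy proxy computed from per-tab resource samples. The metric names, the weights, and the linear combination are assumptions for illustration, not GreenBrowsing's actual model.

```python
# Hypothetical sketch of GreenBrowsing-style certification: domains are
# ranked by a weighted energy proxy over sampled CPU and memory usage.
# Weights (0.7 CPU, 0.3 memory) are illustrative assumptions.

def energy_proxy(cpu_pct, mem_mb, weights=(0.7, 0.3)):
    """Combine CPU and memory usage into a single energy-proxy score."""
    w_cpu, w_mem = weights
    return w_cpu * cpu_pct + w_mem * (mem_mb / 1024)

def rank_domains(samples):
    """samples: {domain: [(cpu_pct, mem_mb), ...]} -> domains, greenest first."""
    scores = {
        d: sum(energy_proxy(c, m) for c, m in obs) / len(obs)
        for d, obs in samples.items()
    }
    return sorted(scores, key=scores.get)

samples = {
    "news.example": [(45.0, 900), (50.0, 950)],
    "docs.example": [(5.0, 200), (7.0, 220)],
}
print(rank_domains(samples))  # → ['docs.example', 'news.example']
```

A real deployment would feed this from browser-level resource accounting rather than hand-picked samples.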
- Distributed and decentralized orchestration of containers on edge clouds. Publication. Pires, André; Simão, José; Veiga, Luís. Cloud computing has been successful in providing substantial amounts of resources to deploy scalable and highly available applications. However, there is a growing need for lower-latency services and cheap bandwidth access to accommodate the expansion of IoT and other applications that reside at the internet's edge. The development of community networks and volunteer computing, together with today's low cost of compute and storage devices, is filling the internet's edge with a large amount of still-underutilized resources. As a result, new computing paradigms such as Edge Computing and Fog Computing are emerging. This work presents Caravela, a Docker container orchestrator that uses volunteer edge resources from users to build an Edge Cloud where applications can be deployed using standard Docker containers. Current cloud solutions are mostly tied to a centralized cluster-environment deployment. Caravela employs a completely decentralized architecture, with resource discovery and scheduling algorithms that cope with (1) the large number of volunteer devices and the volatile environment, (2) the wide-area networks that connect the devices, and (3) the absence of a natural central administration.
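A hedged sketch of the decentralized-discovery idea: volunteer nodes publish resource offers into a hash ring keyed by resource class, and deployment requests look up the ring without any central scheduler. The ring structure and the resource-class key are illustrative assumptions, not Caravela's actual protocol.

```python
# Toy decentralized resource discovery in the spirit of Caravela:
# offers are indexed by a hashed resource-class key, so any node can
# locate matching offers without a central registry. Illustrative only.
import hashlib

def ring_position(key, ring_size=2**16):
    """Deterministic position of a key on a fixed-size hash ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % ring_size

class OfferRing:
    def __init__(self):
        self.offers = {}  # ring position -> list of (node, cpus, mem_gb)

    def publish(self, node, cpus, mem_gb):
        pos = ring_position(f"{cpus}c{mem_gb}g")  # hypothetical class key
        self.offers.setdefault(pos, []).append((node, cpus, mem_gb))

    def discover(self, cpus, mem_gb):
        return self.offers.get(ring_position(f"{cpus}c{mem_gb}g"), [])

ring = OfferRing()
ring.publish("vol-7", 2, 4)
print(ring.discover(2, 4))  # offers matching the requested class
```

In the real system the ring would be distributed across the volunteer nodes themselves (a DHT), with churn handling that this sketch omits.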
- EcoVMBroker: energy-aware scheduling for multi-layer datacenters. Publication. Fernandes, Rodrigo; Simão, José; Veiga, Luís. The cloud relies on efficient algorithms to find resources for jobs, fulfilling each job's requirements while optimising an objective function. Utility is a measure of client satisfaction that can be treated as an objective function maximised by schedulers according to the agreed service level agreement (SLA). We propose EcoVMBroker, which can reduce energy consumption by using dynamic voltage and frequency scaling (DVFS) and by applying utility reductions that differ across classes of users and ranges of resource allocations. Using efficient data structures and a hierarchical architecture, we created a scalable solution for the fast-growing heterogeneous cloud. EcoVMBroker shows that we can delegate work in a hierarchical datacenter, make decisions based on summaries of resource usage collected from several nodes, and still be efficient.
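The hierarchical-delegation idea can be sketched minimally: leaf brokers summarize per-node usage, and a parent broker decides from summaries alone rather than per-node data. Field names and the selection rule are assumptions for illustration.

```python
# Minimal sketch of hierarchical delegation as in EcoVMBroker:
# decisions at the top of the hierarchy use aggregate summaries,
# never raw per-node state. Illustrative, not the paper's design.

def summarize(nodes):
    """nodes: [(free_cpus, free_mem_gb)] -> (total_cpus, total_mem, node_count)."""
    return (sum(c for c, _ in nodes), sum(m for _, m in nodes), len(nodes))

def pick_branch(summaries, cpus, mem):
    """Choose the first branch whose summary shows enough aggregate capacity."""
    for name, (free_c, free_m, _count) in summaries.items():
        if free_c >= cpus and free_m >= mem:
            return name
    return None

summaries = {
    "rack-A": summarize([(1, 2), (0, 1)]),
    "rack-B": summarize([(8, 32), (4, 16)]),
}
print(pick_branch(summaries, 6, 24))  # → rack-B
```

Deciding from summaries is what keeps the hierarchy scalable: the parent never inspects individual nodes, trading some placement precision for far less state.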
- FairCloud: truthful cloud scheduling with continuous and combinatorial auctions. Publication. Fonseca, Artur; Simão, José; Veiga, Luís. With cloud computing, access to computational resources has become increasingly easy, and applications can offer improved scalability and availability. The datacenters that support this model have huge energy consumption and a limited pricing model. One way of improving energy efficiency is to reduce the idle time of resources, i.e., time when resources are active but serve little useful business purpose. This can be done by improving scheduling across datacenters. We present FairCloud, a scalable cloud-auction system that facilitates allocation by allowing the adaptation of VM requests (through conversion to other VM types and/or resource capping, i.e., degradation), depending on the user profile. Additionally, the system implements an internal reputation mechanism to detect providers with low Quality of Service (QoS). FairCloud was implemented using CloudSim and its CloudAuctions extension, and was tested with the Google Cluster Data. We observed higher quality in the served requests while maintaining CPU utilization. Our reputation mechanism proved effective by lowering the orders placed on providers with lower quality.
- GC-Wise: a self-adaptive approach for memory-performance efficiency in Java VMs. Publication. Simão, José; Esteves, S.; Pires, André; Veiga, Luís. High-level language runtimes are ubiquitous in every cloud deployment. From geo-distributed, resource-heavy cloud providers to the new Fog and Edge deployment paradigms, all rely on these runtimes for portability, isolation and resource management. Across these clouds, efficient resource management of several managed runtimes involves limiting the heap size of some VMs so that extra memory can be assigned to higher-priority workloads. The challenges in this approach lie in the potential scale of such systems and in the need to make decisions in an application-driven way, because performance degradation can be severe and should therefore be minimized. Also, each tenant tends to repeat the execution of applications with similar memory-usage patterns, giving the opportunity to reuse parameters known to work well for a given workload. This paper presents GC-Wise, a system that determines, at run-time, the best values for critical heap management parameters of the OpenJDK JVM, aiming to maximize memory-performance efficiency. GC-Wise comprises two main phases: 1) a training phase where it collects, with different heap resizing policies, representative execution metrics during the lifespan of a workload; and 2) an execution phase where an oracle matches the execution parameters of new workloads against those of already-seen workloads and enforces the best heap resizing policy. Unlike other works, the oracle can also decide upon unknown workloads. Using representative applications and different hardware settings (a resourceful server and a fog-like device), we show that our approach can lead to significant memory savings with low impact on application throughput. Furthermore, we show that we can predict the best heap resizing configuration with high accuracy in a relatively short period of time.
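The oracle's matching step can be sketched as nearest-neighbour lookup in a metric space of profiled workloads. The metric vectors and policy names below are hypothetical; GC-Wise's actual features and policies are defined in the paper.

```python
# Minimal sketch of the GC-Wise oracle idea: a new workload's metrics
# are matched against previously profiled workloads, and the heap
# resizing policy of the nearest one is reused. Hypothetical values.
import math

profiles = {
    # (alloc_rate_mb_s, live_set_mb) -> best heap resizing policy seen in training
    (120.0, 400.0): "shrink-aggressive",
    (900.0, 2048.0): "grow-conservative",
}

def nearest_policy(metrics):
    """Return the policy of the profiled workload closest in metric space."""
    best = min(profiles, key=lambda p: math.dist(p, metrics))
    return profiles[best]

print(nearest_policy((150.0, 512.0)))  # → shrink-aggressive
```

Because the lookup always returns the closest profile, this style of oracle can decide for workloads it has never seen, matching the abstract's claim about unknown workloads.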
- Locality-aware GC optimisations for big data workloads. Publication. Patrício, Duarte; Bruno, Rodrigo; Simão, José; Ferreira, Paulo; Veiga, Luís. Many Big Data analytics and IoT scenarios rely on fast, non-relational storage (NoSQL) to help process massive amounts of data. In addition, managed runtimes (e.g. the JVM) are now widely used to support the execution of these NoSQL storage solutions, particularly when dealing with Big Data key-value store-driven applications. The benefits of such runtimes can, however, be limited by automatic memory management, i.e., Garbage Collection (GC), which does not consider object locality, resulting in objects that point to each other being dispersed in memory. In the long run this may break applications' service levels due to extra page faults and degraded locality in system-level memory caches. We propose LAG1 (short for Locality-Aware G1), an extension of modern heap layouts that promotes locality between groups of related objects. This is done with no prior application profiling and in a way that is transparent to the programmer, without requiring changes to existing code. The heap layout and algorithmic extensions are implemented on top of the Garbage First (G1) collector (the new default collector) of the HotSpot JVM. Using the YCSB benchmarking tool to benchmark HBase, a well-known and widely used Big Data application, we show negligible overhead in frequent operations such as the allocation of new objects, and significant improvements when accessing data, supported by higher hit rates in system-level memory structures.
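The grouping idea can be illustrated abstractly: treat the object reference graph as undirected and evacuate each connected component into the same region, so related objects stay contiguous. The connected-components rule is a simplification for illustration; LAG1's actual grouping operates inside the G1 heap.

```python
# Toy sketch of locality-aware grouping: objects that reference each
# other end up in the same group (i.e., would be copied to the same
# heap region). Connected components over references, for illustration.

def group_objects(refs):
    """refs: {obj: [referenced objs]} -> list of groups (connected components)."""
    adj = {o: set() for o in refs}
    for o, targets in refs.items():
        for t in targets:
            adj.setdefault(o, set()).add(t)
            adj.setdefault(t, set()).add(o)  # undirected for grouping purposes
    seen, groups = set(), []
    for o in adj:
        if o in seen:
            continue
        stack, comp = [o], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.append(cur)
            stack.extend(adj[cur])
        groups.append(sorted(comp))
    return groups

print(group_objects({"a": ["b"], "b": [], "c": ["d"], "d": []}))
# → [['a', 'b'], ['c', 'd']]
```

A collector applying this would copy each group into one region during evacuation, which is what reduces page faults and cache misses when the group is traversed later.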
- Oversubscribing micro-clouds with energy-aware containers scheduling. Publication. Mendes, Sérgio; Simão, José; Veiga, Luís. Cloud computation is being pushed to the edge of the network, towards micro-clouds, to promote more energy efficiency and lower latency compared to heavily resourced, centralized datacenters. This trend will enable new markets and providers to fill the current gap. There are, however, challenges in this design: (i) devices have fewer resources, leading to frequent use of oversubscription; (ii) there is a lack of economic incentives for both the provider and the application owner to cope with less-than-fully fulfilled requests. To support this trend, the virtualization layer of micro-clouds is currently dominated by containers, which have a small memory footprint and strong isolation properties. We propose an extension to Docker Swarm, a widely used container orchestrator, with an oversubscribing scheduling algorithm based on driving resource utilization to levels where energy efficiency is maximized. This solution improves CPU and memory utilization over Spread and Binpack (Docker Swarm strategies). Although we introduce a small overhead in scheduling times, our solution manages to allocate more requests, with a successful allocation rate of 83% against 57% for current solutions, measured on the scheduling of real CPU- and memory-intensive workloads (e.g. video encoding, key-value storage and a deep-learning algorithm).
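An oversubscribing, energy-aware placement rule of this flavour can be sketched as: place a container on the node whose post-placement utilization lands closest to a target at which energy efficiency is assumed to peak, allowing allocations above nominal capacity up to a cap. The 85% target and 120% cap are illustrative assumptions, not the paper's calibrated values or Docker Swarm behaviour.

```python
# Hedged sketch of energy-aware oversubscribed placement: prefer the
# node whose utilization after placement is nearest an assumed
# energy-efficiency sweet spot, within an oversubscription cap.

TARGET = 0.85   # assumed utilization where energy efficiency peaks
CAP = 1.20      # allow oversubscription up to 120% of nominal capacity

def place(nodes, demand):
    """nodes: {name: current_cpu_utilization}; returns chosen node or None."""
    feasible = {n: u + demand for n, u in nodes.items() if u + demand <= CAP}
    if not feasible:
        return None
    return min(feasible, key=lambda n: abs(feasible[n] - TARGET))

nodes = {"edge-1": 0.30, "edge-2": 0.70, "edge-3": 1.10}
print(place(nodes, 0.20))  # → edge-2 (0.90 is closest to the 0.85 target)
```

Contrast with Swarm's Spread (least-loaded node, here edge-1) and Binpack (most-loaded feasible node): the energy-aware rule deliberately packs toward the sweet spot rather than toward either extreme.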
- Partial utility-driven scheduling for flexible SLA and pricing arbitration in clouds. Publication. Simão, José; Veiga, Luís. Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible, because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based, non-linear reductions of utility, different for classes of users and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (public, private and community clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters with different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing more revenue per resource allocated, and it scales well with the size of the datacenter when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads’ execution time also improves, through an SLA-based redistribution of their VMs’ computational power.
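A minimal sketch of range-based partial utility, assuming (for illustration) three allocation ranges and two user classes. The breakpoints and utility fractions are hypothetical, not the paper's calibrated values.

```python
# Partial utility sketch: each user class maps ranges of the allocated
# fraction of requested resources to the utility the client assigns.
# Breakpoints and values below are illustrative assumptions.

PARTIAL_UTILITY = {
    # class -> (min fraction of requested resources, utility), highest first
    "gold":   [(1.0, 1.0), (0.8, 0.6), (0.5, 0.2)],
    "bronze": [(1.0, 1.0), (0.8, 0.9), (0.5, 0.7)],
}

def utility(user_class, allocated_fraction):
    """Utility a client of this class assigns to a (possibly degraded) allocation."""
    for min_frac, u in PARTIAL_UTILITY[user_class]:
        if allocated_fraction >= min_frac:
            return u
    return 0.0

# A gold user tolerates degradation far less than a bronze user:
print(utility("gold", 0.8), utility("bronze", 0.8))  # → 0.6 0.9
```

A scheduler in an overcommitted datacenter can then take resources from the VM whose class loses the least utility per unit reclaimed, which is what makes the transfer economically efficient for the provider.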
- Programming languages for data-intensive HPC applications: a systematic mapping study. Publication. Amaral, Vasco; Norberto, Beatriz; Goulão, Miguel; Aldinucci, Marco; Benkner, Siegfried; Bracciali, Andrea; Carreira, Paulo; Celms, Edgars; Correia, Luís; Grelck, Clemens; Karatza, Helen; Kessler, Christoph; Kilpatrick, Peter; Martiniano, Hugo; Mavridis, Ilias; Pllana, Sabri; Respicio, Ana; Simão, José; Veiga, Luís; Visa, Ari Juha Eljas. A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles were identified through an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase, by human inspection of article abstracts, titles and keywords, to 152 relevant articles published in the period 2006-2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey, which involved 57 HPC experts.
The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt. We believe the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.
- Runtime object lifetime profiler for latency sensitive big data applications. Publication. Bruno, Rodrigo; Patrício, Duarte; Simão, José; Veiga, Luís; Ferreira, Paulo. Latency-sensitive services such as credit-card fraud detection and website targeted advertisement rely on Big Data platforms that run on top of memory-managed runtimes, such as the Java Virtual Machine (JVM). These platforms, however, suffer from unpredictable and unacceptably high pause times due to inadequate memory management decisions (e.g., allocating objects with very different lifetimes next to each other, resulting in severe memory fragmentation). This leads to frequent and long application pauses, breaking Service Level Agreements (SLAs). This problem has been identified before, and results show that current memory management techniques are ill-suited for applications that hold massive amounts of long-lived objects in memory (which is the case for a wide spectrum of Big Data applications). Previous works reduce such application pauses by allocating objects off-heap, in special allocation regions/generations, or by using ultra-low-latency Garbage Collectors (GC). However, all these solutions either require some combination of programmer effort and knowledge, source-code access, or offline profiling (with clear negative impacts on programmer productivity), or impose a significant cost on application throughput and/or memory in order to reduce application pauses. We propose ROLP, a Runtime Object Lifetime Profiler that profiles application code at runtime and helps pretenuring GC algorithms allocate objects with similar lifetimes close to each other, so that overall fragmentation, GC effort and application pauses are reduced. ROLP is implemented for OpenJDK 8 and was evaluated with a recently proposed open-source pretenuring collector (NG2C). Results show long-tail latency reductions of up to 51% for Lucene, 85% for GraphChi, and 69% for Cassandra.
This is achieved with negligible throughput (<6%) and memory overheads, with no programmer effort and no source-code access.
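The profiling idea behind ROLP can be illustrated abstractly: track, per allocation site, how many collections its objects tend to survive, and pretenure sites whose objects are long-lived. The counters and the threshold below are assumptions for illustration; ROLP itself instruments the JVM's JIT-compiled code, not Python.

```python
# Toy sketch of runtime object-lifetime profiling: per-allocation-site
# survival statistics drive a pretenuring decision. Illustrative only.
from collections import defaultdict

survived = defaultdict(int)   # site -> total collections survived by its objects
allocated = defaultdict(int)  # site -> number of objects allocated

def record_alloc(site):
    allocated[site] += 1

def record_survival(site, n_collections):
    survived[site] += n_collections

def should_pretenure(site, threshold=2.0):
    """Pretenure sites whose objects survive >= threshold GCs on average."""
    if allocated[site] == 0:
        return False
    return survived[site] / allocated[site] >= threshold

record_alloc("Cache.put"); record_survival("Cache.put", 5)
record_alloc("Request.parse"); record_survival("Request.parse", 0)
print(should_pretenure("Cache.put"), should_pretenure("Request.parse"))
# → True False
```

Allocating the long-lived "Cache.put" objects directly into an older region keeps them away from short-lived request objects, which is the fragmentation reduction the abstract describes.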
