ISEL - Eng. Elect. Tel. Comp. - Artigos
Browsing ISEL - Eng. Elect. Tel. Comp. - Artigos by Title
Now showing 1 - 10 of 338
- A 3-phase model for VIS/NIR µc-Si:H p-i-n detectors
Vieira, Manuela; Fantoni, Alessandro; Fernandes, Miguel; Maçarico, António Filipe Ruas Trindade; Schwarz, R.
The spectral response and the photocurrent delivered by entirely microcrystalline p-i-n µc-Si:H detectors are analysed under different applied bias and light illumination conditions. The spectral response and the internal collection depend not only on the energy range but also on the illumination side. Under [p]- and [n]-side irradiation, the internal collection characteristics have an atypical shape: collection is high for applied bias lower than the open-circuit voltage, shows a steep decrease near the open-circuit voltage (steeper under [n]-side illumination) and levels off for higher voltages. Additionally, the numerical modeling of the VIS/NIR detector, based on the band discontinuities near the grain boundaries and interfaces, complements the study and gives insight into the internal physical processes.
- 3D antenna characterization for WPT applications
Jordão, Marina; Pires, Diogo; Belo, Daniel; Pinho, Pedro; Carvalho, Nuno Borges
The main goal of this paper is to present a three-dimensional (3D) antenna array to improve the performance of wireless power transmission (WPT) systems, as well as its characterization with over-the-air (OTA) multi-sine techniques. The 3D antenna consists of 15 antenna elements attached to an alternative 3D structure, allowing energy to be transmitted in all azimuth directions at different elevation angles without mechanical movement. The OTA multi-sine characterization technique was originally used to identify issues in antenna arrays; in this work, it is used to identify which elements of the 3D antenna should operate to transmit energy in a specific direction. In addition, the 3D antenna design is described and characterized to validate its operation. Since 3D antennas are advantageous in WPT applications, the antenna is evaluated in a real WPT scenario to power an RF-DC converter, and experimental results are presented.
- Adaptive deblocking filter for transform domain Wyner-Ziv video coding
Martins, R.; Brites, C.; Ascenso, Joao; Pereira, F.
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder rather than at the encoder, as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs still has not reached that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of the considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains, which can reach 0.63 dB overall for sequences with high motion content when large groups of pictures are used.
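The core idea behind such a filter can be illustrated with a minimal sketch (this is not the H.264/AVC filter itself, whose boundary-strength logic is considerably more elaborate): samples on either side of a block boundary are smoothed only when the discontinuity is small enough to look like a coding artefact rather than a real image edge.

```python
def deblock_boundary(left, right, alpha=10, beta=3):
    """Smooth the two samples adjacent to a block boundary.

    left/right: lists of 4 samples on each side of the boundary
    (p3..p0 and q0..q3). Filtering is applied only when the edge
    discontinuity looks like a coding artefact, mimicking the
    adaptive on/off decision of standard deblocking filters.
    The thresholds alpha/beta are illustrative, not the standard's.
    """
    p1, p0 = left[-2], left[-1]
    q0, q1 = right[0], right[1]
    # Filter only weak discontinuities; strong (real) edges are kept.
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        left[-1] = (p1 + 2 * p0 + q0 + 2) // 4   # new p0
        right[0] = (p0 + 2 * q0 + q1 + 2) // 4   # new q0
    return left, right
```

A small blocking step (e.g. 102 vs 106) gets averaged out, while a large step (e.g. 100 vs 200) is left untouched as genuine detail.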
- Adaptive predictive coding speech coding techniques applied to electrocardiogram signals
Silva, Daniel; Martins, Guilherme; Lourenço, André; Meneses, Carlos
This paper describes a lossy ECG signal coder with an adaptive predictive coding scheme initially proposed for speech coders. The predictors include linear predictive coding, which exploits the correlation between consecutive samples, and a long-term predictor, which exploits the signal's quasi-periodicity. The prediction residue, with a smaller dynamic range and therefore able to be encoded with fewer bits than the original, is transmitted sample by sample. The prediction coefficients and the amplitude of the residue are transmitted once per heartbeat, with a negligible number of bits compared to the total bit rate. The long-term predictor is shown to perform reliably when the heart rate does not change rapidly. Linear predictive coding, by contrast, is more reliable and presents a better prediction gain. The best coder developed uses double prediction and, at a 45% compression ratio, achieves a prediction gain of 24.8 dB.
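The short-term prediction step can be sketched as a generic linear-prediction residue computation (this is not the authors' coder; in practice the coefficients would be estimated per heartbeat rather than fixed):

```python
def lpc_residual(samples, coeffs):
    """Compute the short-term prediction residue
    e[n] = x[n] - sum_k a_k * x[n-k].

    samples: input signal (e.g. one ECG heartbeat).
    coeffs: predictor coefficients a_1..a_p.
    The residue has a smaller dynamic range than the signal itself,
    so it can be encoded with fewer bits.
    """
    p = len(coeffs)
    residual = []
    for n in range(len(samples)):
        # Past samples before the start of the block are treated as zero.
        pred = sum(coeffs[k] * samples[n - 1 - k]
                   for k in range(p) if n - 1 - k >= 0)
        residual.append(samples[n] - pred)
    return residual
```

For a slowly varying signal, even a first-order predictor (a_1 = 1, i.e. "predict the previous sample") already flattens the residue considerably.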
- Advanced side information creation techniques and framework for Wyner-Ziv video coding
Ascenso, Joao; Pereira, Fernando
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfil novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
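The motion estimation underlying such frame interpolation can be illustrated with a minimal full-search block-matching sketch; the paper's techniques add motion-field regularization and bidirectional estimation on top of this basic idea, so the code below is only the starting point, not the proposed method:

```python
def block_match(ref, cur, bx, by, bs, search):
    """Full-search block matching between two frames.

    Finds the motion vector (dy, dx) minimising the sum of absolute
    differences (SAD) between the (bs x bs) block of `cur` at
    (by, bx) and candidate blocks in the reference frame `ref`.
    Frames are lists of rows of pixel intensities.
    """
    h, w = len(ref), len(ref[0])
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip candidates that fall outside the reference frame.
            if not (0 <= by + dy and by + dy + bs <= h and
                    0 <= bx + dx and bx + dx + bs <= w):
                continue
            sad = sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
                      for i in range(bs) for j in range(bs))
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

In a frame-interpolation setting, the estimated vectors between the two decoded reference frames are halved to project blocks onto the missing intermediate frame.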
- Agent inferencing meets the semantic web
Trigo, Paulo; Coelho, Helder
We provide an agent with the capability to infer the relations (assertions) entailed by the rules that describe the formal semantics of an RDFS knowledge base. The proposed inferencing process formulates each semantic restriction as a rule implemented within a SPARQL query statement. The process expands the original RDF graph into a fuller graph that explicitly captures the semantics described by the rules. The approach is currently being explored in order to support descriptions that follow the generic Semantic Web Rule Language. An experiment using the Fire-Brigade domain, a small-scale knowledge base, illustrates the agent modeling method and the inferencing process.
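One such entailment rule can be sketched directly on a triple set; the example below hand-codes the standard RDFS subclass rule (the pattern a SPARQL CONSTRUCT or INSERT query would express), with hypothetical Fire-Brigade class names:

```python
def rdfs_subclass_entailment(triples):
    """Expand an RDF graph (a set of (s, p, o) triples) with the rule:
       (?x rdf:type ?c) and (?c rdfs:subClassOf ?d)  =>  (?x rdf:type ?d)
    Iterates to a fixed point, so chains of subclasses are covered.
    """
    graph = set(triples)
    while True:
        inferred = {(x, "rdf:type", d)
                    for (x, p1, c) in graph if p1 == "rdf:type"
                    for (c2, p2, d) in graph
                    if p2 == "rdfs:subClassOf" and c2 == c}
        if inferred <= graph:          # nothing new: fixed point reached
            return graph
        graph |= inferred

# Hypothetical Fire-Brigade facts: a truck is a FireTruck, FireTruck is
# a subclass of Vehicle, Vehicle is a subclass of Asset.
kb = {
    ("fireTruck1", "rdf:type", "FireTruck"),
    ("FireTruck", "rdfs:subClassOf", "Vehicle"),
    ("Vehicle", "rdfs:subClassOf", "Asset"),
}
expanded = rdfs_subclass_entailment(kb)
```

After expansion, the graph explicitly asserts that fireTruck1 is also a Vehicle and an Asset, which is exactly the "fuller graph" the inferencing process produces.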
- Algorithm-oriented design of efficient many-core architectures applied to dense matrix multiplication
José, Wilson M.; Silva, Ana Rita; Véstias, Mário; Neto, Horácio
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics such as performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
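The cycle-count equation itself is not reproduced in the abstract, but the algorithm it analyzes, dense matrix multiplication tiled so that working blocks fit in each core's local memory, can be sketched as follows (an illustrative textbook blocked multiply, not the paper's hardware mapping):

```python
def blocked_matmul(A, B, block=2):
    """Dense matrix multiplication with square tiling.

    This is the access pattern a formal analysis relates to local
    memory size and external memory bandwidth: once a (block x block)
    tile of A and B is loaded, it is reused for `block` output columns
    or rows, reducing traffic to external memory.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for kk in range(0, n, block):
                # Multiply one tile pair; all operands fit in "local memory".
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Counting the loads per tile as a function of the block size is precisely the kind of reasoning that yields a closed-form relation between execution cycles, local memory and bandwidth.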
- Aligning software engineering teaching strategies and practices with industrial needs
Metrôlho, José Carlos; Ribeiro, Fernando; Graça, Paula; Mourato, Ana; Figueiredo, David; Vilarinho, Hugo
Several approaches have been proposed to reduce the gap between software engineering education and the needs and practices of the software industry. Many of them aim to promote a more active learning attitude in students and to provide them with more realistic experiences, recreating industry software development environments and collaborative development, in some cases with the involvement of companies, mainly acting as potential customers. Since many degree courses typically offer separate subjects to teach requirements engineering, analysis and design, coding, or validation, integrating all these phases normally requires experience in a project context and is usually carried out in a final-year project. The approach described in this article benefits from the close involvement of a software house, which goes beyond the common involvement of a potential customer. Students are integrated into distributed teams comprising students, teachers and IT professionals. Teams follow the agile Scrum methodology and use the OutSystems low-code development platform, providing students with the experience of an almost real scenario. The results show that this approach complements the knowledge and practice acquired in course subjects, develops the students' technical and non-technical skills, such as commitment, teamwork and communication, and introduces them to the methodologies and development strategies used in these companies. The feedback from the teachers involved, software companies and students was very positive.
- Amorphous silicon photovoltaic modules on flexible plastic substrates
Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela; Khosropour, Alireza; Yang, Ruifeng; Sazonov, Andrei
This paper reports on a monolithic 10 cm × 10 cm PV module integrating an array of 72 a-Si:H n-i-p cells on a 100 µm thick polyethylene naphthalate substrate. The n-i-p stack is deposited using a PECVD system at 150 °C substrate temperature. The design optimization and device performance analysis are performed using a two-dimensional distributed circuit model of the photovoltaic cell. The circuit simulator SPICE is used to calculate current and potential distributions in a network of sub-cell circuits, and also to map Joule losses in the front TCO electrode and the metal grid. Experimental results show that shunt leakage is one of the factors reducing the device performance. Current-voltage characteristics of individual a-Si:H p-i-n cells were analyzed to estimate the variation of shunt resistances. Using the LBIC technique, the presence of multiple shunts in the n-i-p cell was detected. To understand the nature of the electrical shunts, the change in the surface roughness of all device layers was analyzed throughout the fabrication process. It is found that surface defects in the plastic foils, thermally induced during device fabrication, form microscopic pinholes filled with highly conductive top-electrode material.
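The kind of sub-cell element used in such a distributed circuit model can be sketched with the standard single-diode equation including a shunt resistance; the parameter values below are illustrative defaults, not the paper's measured values:

```python
import math

def pv_cell_current(v, i_ph=0.015, i_0=1e-9, n=1.5, r_sh=1000.0, t=300.0):
    """Single-diode photovoltaic cell model with shunt leakage:

        I = I_ph - I_0 * (exp(V / (n * V_t)) - 1) - V / R_sh

    v: terminal voltage [V]; i_ph: photogenerated current [A];
    i_0: diode saturation current [A]; n: ideality factor;
    r_sh: shunt resistance [ohm]; t: temperature [K].
    A low r_sh (e.g. a conductive pinhole through the stack) drains
    current that would otherwise reach the load.
    """
    k_b, q = 1.380649e-23, 1.602176634e-19
    v_t = k_b * t / q  # thermal voltage, ~25.9 mV at 300 K
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0) - v / r_sh
```

Sweeping this equation over a grid of sub-cells (each with its own r_sh) is how a distributed model localizes shunts and maps their impact on the module's I-V curve.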
- An adaptive learning-based approach for vehicle mobility prediction
Irio, Luís; Ip, Andre; Oliveira, Rodolfo; Luís, Miguel
This work presents an innovative methodology to predict the future trajectories of vehicles when their current and previous locations are known. We propose an algorithm to adapt the vehicles' trajectory data, based on consecutive GPS locations, and to construct a statistical inference module that can be used online for mobility prediction. The inference module is based on a hidden Markov model (HMM), where each trajectory is modeled as a subset of consecutive locations. The prediction stage uses the statistical information inferred so far and is based on the Viterbi algorithm, which identifies the subset of consecutive locations (hidden information) with the maximum likelihood when a prior subset of locations is known (observations). By analyzing the disadvantages of using the Viterbi algorithm (TDVIT) when the number of hidden states increases, we propose an enhanced algorithm (OPTVIT), which decreases the prediction computation time. Offline analysis of vehicle mobility is conducted through the evaluation of a dataset containing real traces of 442 taxis running in the city of Porto, Portugal, during a full year. Experimental results obtained with the dataset show that the prediction process improves when more information about prior vehicle mobility is available. Moreover, the computation time of the prediction process is significantly improved when OPTVIT is adopted, and approximately 90% prediction performance can be achieved, showing the effectiveness of the proposed method for vehicle trajectory prediction.
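The Viterbi decoding at the heart of the prediction stage is the standard textbook algorithm; a minimal sketch is shown below with hypothetical states and probabilities (this is plain Viterbi, not the paper's OPTVIT optimization):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Find the most likely sequence of hidden states (e.g. road-segment
    locations) given a sequence of observations (e.g. noisy GPS fixes).

    start_p[s]: initial probability of state s.
    trans_p[r][s]: probability of moving from state r to state s.
    emit_p[s][o]: probability of observing o while in state s.
    """
    # V[t][s]: probability of the best path ending in state s at step t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s], back[t][s] = prob, prev
    # Trace back from the best final state to recover the full path.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

Its cost grows with the square of the number of hidden states per step, which is exactly the scaling problem that motivates an optimized variant when the state space is a city-wide road network.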