900 results for Operational and network efficiency
Abstract:
We review mathematical aspects of biophysical dynamics, signal transduction and network architecture that have been used to uncover functionally significant relations between the dynamics of single neurons and the networks they compose. We focus on examples that combine insights from these three areas to expand our understanding of systems neuroscience. These range from single neuron coding to models of decision making and electrosensory discrimination by networks and populations, as well as coincidence detection in pairs of dendrites and the dynamics of large networks of excitable dendritic spines. We conclude by describing some of the challenges that lie ahead as the applied mathematics community seeks to provide the tools that will ultimately underpin systems neuroscience.
Abstract:
When the ambient air quality standards established in EU Directive 2008/50/EC are exceeded, Member States are obliged to develop and implement Air Quality Plans (AQPs) to improve air quality and health. Notwithstanding the achievements in emission reduction and air quality improvement, additional efforts need to be undertaken to improve air quality in a sustainable way, i.e. through a cost-efficiency approach. This work was developed in the scope of the recently concluded MAPLIA project "Moving from Air Pollution to Local Integrated Assessment", and focuses on the definition and assessment of emission abatement measures and their associated costs, air quality and health impacts, and benefits, by means of air quality modelling tools, health impact functions and cost-efficiency analysis. The MAPLIA system was applied to the Grande Porto urban area (Portugal), addressing PM10 and NOx as the most important pollutants in the region. Four different measures to reduce PM10 and NOx emissions were defined and characterized in terms of emissions and implementation costs, and combined into 15 emission scenarios, simulated with the TAPM air quality modelling tool. Air pollutant concentration fields were then used to estimate health benefits in terms of avoided costs (external costs), using dose-response health impact functions. Results revealed that, among the 15 scenarios analysed, the scenario including all 4 measures led to a total net benefit of 0.3 M€·y⁻¹. The largest net benefit is obtained for the scenario considering the conversion of 50% of open fireplaces into heat-recovery wood stoves: although the implementation costs of this measure are high, the benefits outweigh them. The research outcomes confirm that the MAPLIA system is useful for policy decision support on air quality improvement strategies, and could be applied to other urban areas where AQPs need to be implemented and monitored.
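As a concrete illustration of the net-benefit bookkeeping this kind of cost-efficiency analysis performs, the following minimal Python sketch compares scenarios; the scenario names and all figures are invented placeholders, not MAPLIA results.

```python
# Minimal sketch of MAPLIA-style cost-efficiency bookkeeping.
# All scenario names and figures below are illustrative placeholders.

# Each scenario: implementation cost and avoided external (health) costs, in M EUR/year.
scenarios = {
    "S1_wood_stoves_50pct": {"implementation_cost": 2.1, "avoided_external_costs": 2.9},
    "S2_low_emission_buses": {"implementation_cost": 1.4, "avoided_external_costs": 1.2},
    "S15_all_measures":      {"implementation_cost": 4.8, "avoided_external_costs": 5.1},
}

def net_benefit(s):
    """Net benefit = avoided health (external) costs minus implementation costs."""
    return s["avoided_external_costs"] - s["implementation_cost"]

best = max(scenarios, key=lambda k: net_benefit(scenarios[k]))
for name, s in scenarios.items():
    print(f"{name}: net benefit = {net_benefit(s):+.1f} M EUR/y")
print("Largest net benefit:", best)
```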
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to an exponential growth in network traffic. The demand for supporting fast-growing consumer data rates urges wireless service providers and researchers to seek a new efficient radio access technology beyond what current 4G LTE can provide: the so-called 5G technology. On the other hand, ubiquitous RFID tags, sensors, actuators, mobile phones, and the like cut across many areas of modern-day living, offering the ability to measure, infer and understand environmental indicators. The proliferation of these devices has given rise to the Internet of Things (IoT). For researchers and engineers in the field of wireless communication, the exploration of new effective techniques to support 5G communication and the IoT is an urgent task, one that not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does a good alternative exist that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing, with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for a TR system to achieve maximal spectral efficiency. By evaluating the spectral efficiency, the optimal bandwidth for a TR system is found to be determined by system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms achieve a bit-error rate similar to that of a TR system with full-precision waveforms. Besides 5G technology, the Internet of Things (IoT) has recently attracted more and more attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. Given the massive number of devices in the IoT, one significant form of heterogeneity is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.
Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing down the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system, which can address the bandwidth heterogeneity while maintaining the benefit of TR. The increased complexity of the proposed system lies in the digital processing at the access point (AP), rather than at the device end, and can easily be handled with a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of the terminal devices (TDs) compared with the middleware approach.
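The TR focusing effect described above is easy to reproduce numerically. The sketch below assumes a simple random discrete multipath channel (not the dissertation's channel model) and shows that transmitting the time-reversed conjugate of the channel response concentrates the received energy at a single instant.

```python
# Minimal sketch of the time-reversal (TR) focusing effect, assuming a
# simple discrete multipath channel model with exponential decay.
import numpy as np

rng = np.random.default_rng(0)
L = 64                                  # number of resolvable multipath taps
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

# Basic TR waveform: time-reversed complex conjugate of the channel response.
g = np.conj(h[::-1])
g /= np.linalg.norm(g)

# The received signature is h convolved with g: all taps add coherently at
# a single instant, producing the spatial-temporal focusing peak.
y = np.convolve(h, g)
peak = np.argmax(np.abs(y))
print(f"peak at sample {peak} (expected {L - 1}), "
      f"focusing gain = {np.abs(y[peak]) / np.abs(y).mean():.1f}x")
```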
Abstract:
This paper presents the results of a study that aimed to identify optimal performance standards of Brazilian public and philanthropic hospitals. To carry out the analysis, a model based on Data Envelopment Analysis (DEA) was developed. We collected financial data from the hospitals' financial statements available on the internet, as well as operational data from the Information Technology Department of the Brazilian Public Health Care System – SUS (DATASUS). Data from 18 hospitals from 2007 to 2011 were analyzed. Our DEA model used both operational and financial indicators (variables). Two indicators were considered inputs: value of Fixed Assets (in Brazilian reais) and Planned Capacity; the following indicators were considered outputs: Net Margin, Return on Assets and Institutional Mortality Rate. Under the proposed model, five hospitals showed optimal performance and four hospitals were considered inefficient over the analyzed period. Analysis of the weights indicated the most relevant variables for determining efficiency and the scale-variable values, an important tool to aid decision-making by hospital managers. Finally, the scale variables determined the returns to scale, indicating that 14 hospitals operate under diseconomies of scale. For this set of variables, this may indicate inefficiency in the resource management of the Brazilian public health care system.
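For readers unfamiliar with DEA, the following sketch shows the standard input-oriented CCR formulation solved as a linear program with SciPy; the input/output data are invented and much smaller than the paper's 18-hospital panel.

```python
# Minimal sketch of an input-oriented CCR DEA model solved as a linear
# program with SciPy; the data below are illustrative, not the paper's.
import numpy as np
from scipy.optimize import linprog

# rows = hospitals (DMUs); X = inputs (fixed assets, planned capacity),
# Y = outputs (net margin, return on assets, ...).
X = np.array([[5.0, 120], [3.0, 80], [6.0, 150], [4.0, 90]])
Y = np.array([[0.10, 0.05], [0.08, 0.06], [0.12, 0.04], [0.09, 0.07]])
n = X.shape[0]

def ccr_efficiency(k):
    """Efficiency of DMU k: minimize theta such that a peer combination
    uses at most theta * inputs_k while producing at least outputs_k."""
    c = np.r_[1.0, np.zeros(n)]                 # variables: theta, lambda_1..n
    A_ub = np.vstack([
        np.c_[-X[k], X.T],                      # X.T @ lam <= theta * x_k
        np.c_[np.zeros(Y.shape[1]), -Y.T],      # Y.T @ lam >= y_k
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(n):
    print(f"hospital {k}: efficiency = {ccr_efficiency(k):.3f}")
```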
Abstract:
In a previous work (Nicu et al. 2013), the flocculation efficiency of three chitosans differing in molecular weight and charge density was evaluated for their potential use as wet-end additives in papermaking. Following the promising results obtained, chitosan alone (single system) and its combination with bentonite (dual system) were evaluated as retention aids, and their efficiency was compared with poly(diallyl dimethyl ammonium chloride) (PDADMAC) and polyethylenimine (PEI). In single systems, chitosan was clearly more efficient than PDADMAC and PEI in drainage rate, especially the chitosans with the lowest molecular weights; however, their retention was considerably lower. This drawback can be overcome by using dual systems with anionic bentonite microparticles, the optimum polymer:bentonite ratio being 1:4 (wt./wt.). In dual systems, the differences in retention were almost negligible, the advantage in drainage rate was even larger, and floc reversibility was better. The most efficient chitosan in single systems was Ch.MMW, while Ch.LMW was the most efficient in dual systems. The flocculation mechanism of chitosan was a combination of patch formation, charge neutralization and partial bridge formation, with the predominant mechanism depending on the molecular weight and charge density of the chitosan.
Abstract:
Nowadays, application domains such as smart cities, agriculture and intelligent transportation require communication technologies that combine long transmission range with energy efficiency, while satisfying a set of capabilities and constraints. In addition, interest in Unmanned Aerial Vehicles (UAVs) providing wireless connectivity in such scenarios has increased substantially in recent years thanks to their flexible deployment. The first chapters of this thesis deal with LoRaWAN and Narrowband-IoT (NB-IoT), which recent trends identify as the most promising Low Power Wide Area Network technologies. While LoRaWAN is an open protocol that has gained a lot of interest thanks to its simplicity and energy efficiency, NB-IoT has been introduced by 3GPP as a radio access technology for massive machine-type communications, inheriting legacy LTE characteristics. This thesis offers an overview of the two, comparing them in terms of selected performance indicators. In particular, LoRaWAN technology is assessed both via simulations and experiments, considering different network architectures and solutions to improve its performance (e.g., a new Adaptive Data Rate algorithm). NB-IoT is then introduced to identify which technology is more suitable depending on the application considered. The second part of the thesis introduces the use of UAVs as flying base stations, denoted as Unmanned Aerial Base Stations (UABSs), which are considered one of the key pillars of 6G to offer service for a number of applications. To this end, the performance of an NB-IoT network is assessed considering a UABS following predefined trajectories. Then, machine learning algorithms based on reinforcement learning and meta-learning are considered to optimize both the trajectory and the radio resource management techniques the UABS may rely on in order to serve both static (IoT sensors) and dynamic (vehicular) users. Finally, some experimental projects based on the technologies mentioned so far are presented.
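For context, the sketch below illustrates a margin-based Adaptive Data Rate decision of the kind commonly used as the LoRaWAN baseline; it is not the thesis's new ADR algorithm, and the SNR floors and margins are the usual illustrative values.

```python
# Hedged sketch of a network-side, margin-based LoRaWAN ADR-style decision;
# the demodulation floors per spreading factor and the 10 dB installation
# margin are common illustrative values, not the thesis's parameters.
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
DEVICE_MARGIN_DB = 10          # installation margin (assumption)
STEP_DB = 3                    # one SF step or TX-power step is worth ~3 dB

def adr_decision(snr_history_db, sf, tx_power_dbm, min_power=2):
    """Return an adjusted (sf, tx_power) from the max SNR of recent uplinks."""
    margin = max(snr_history_db) - SNR_FLOOR[sf] - DEVICE_MARGIN_DB
    steps = int(margin // STEP_DB)
    while steps > 0 and sf > 7:                     # spend steps on a faster SF
        sf -= 1
        steps -= 1
    while steps > 0 and tx_power_dbm > min_power:   # then reduce TX power
        tx_power_dbm -= STEP_DB
        steps -= 1
    return sf, tx_power_dbm

print(adr_decision([-3.0, -5.5, -1.2], sf=12, tx_power_dbm=14))
```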
Abstract:
Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data that could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of the gathered data poses privacy concerns when it is stored and processed in centralized locations. To this end, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distill knowledge from decentralized data, however, comes at the cost of more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting strengths and possible weaknesses of the considered solutions. Finally, this thesis presents a novel perspective on the use of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. A vision of relevant future research directions closes the manuscript.
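As a reference point for the decentralized learning setting described above, here is a minimal Federated Averaging (FedAvg) sketch on a toy linear model; FedAvg is the canonical baseline, not the thesis's original method.

```python
# Minimal FedAvg sketch: broadcast a global model, run local SGD on each
# client's private data, then aggregate with size-weighted averaging.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few local SGD steps on a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One round: local training on every client, then weighted averaging."""
    sizes, updates = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                      # statistically non-identical clients
    X = rng.normal(size=(20, 2)) + rng.normal(scale=0.5, size=2)
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights:", w.round(2))  # approaches [2.0, -1.0]
```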
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community's focus has recently shifted towards the interference management and exploitation paradigm to meet increasing data traffic demands. In the Downlink (DL) and Feedspace (FS), LEO satellites with an on-board antenna array can offer service to numerous User Terminals (UTs) on the ground (VSATs or handhelds) in FFR schemes by using cutting-edge digital beamforming techniques. In this setup, the adoption of an effective user scheduling approach is critical, given the unusually high density of user terminals on the ground compared to the number of antennas available on board. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are simultaneously served by the satellite via Space Division Multiplexing (SDM), while the different user groups are served in different time slots via Time Division Multiplexing (TDM). This thesis addresses this problem by formulating user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, it addresses the user scheduling problem in Frequency Division Duplex (FDD) mode, focusing on the FS and user service link (i.e., DL) of a single MB-LEO satellite operating below 6 GHz. The proposed scheduling approaches, which advance the state of the art, are based on graph theory. The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR, and spectral efficiency.
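One simple way to picture the graph-theoretic scheduling idea: build a conflict graph over users whose channels are too correlated, then let a greedy colouring assign TDM slots, each colour class forming an SDM group. The sketch below uses toy channels and an assumed correlation threshold; it is an illustration, not the thesis's algorithm.

```python
# Hedged sketch of graph-based user grouping for MU-MIMO scheduling:
# an edge connects users whose channel correlation exceeds a threshold;
# greedy colouring yields TDM slots, each colour class served via SDM.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_antennas, corr_threshold = 8, 4, 0.5   # toy sizes and threshold
H = rng.normal(size=(n_users, n_antennas)) + 1j * rng.normal(size=(n_users, n_antennas))
H /= np.linalg.norm(H, axis=1, keepdims=True)      # unit-norm channel vectors

# Conflict graph: edge (i, j) if |<h_i, h_j>| exceeds the threshold.
conflict = np.abs(H @ H.conj().T) > corr_threshold
np.fill_diagonal(conflict, False)

# Greedy colouring: each user takes the first slot free of conflicting users.
slot = {}
for u in range(n_users):
    taken = {slot[v] for v in range(u) if conflict[u, v]}
    slot[u] = next(s for s in range(n_users) if s not in taken)

groups = {}
for u, s in slot.items():
    groups.setdefault(s, []).append(u)
for s, users in sorted(groups.items()):
    print(f"TDM slot {s}: SDM group {users}")
```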
Abstract:
In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across the six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems but also allowing the creation of valuable tools for their analysis. In short, the achievements are:
• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.
Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.
Abstract:
Insulin was used as a model protein to develop innovative Solid Lipid Nanoparticles (SLNs) for the delivery of hydrophilic biotech drugs, with potential use in medicinal chemistry. SLNs were prepared by double emulsion with the purpose of promoting stability and enhancing protein bioavailability. Softisan®100 was selected as the solid lipid matrix. The surfactants (Tween®80, Span®80 and Lipoid®S75) and insulin were chosen applying a 2² factorial design with a triplicate central point, evaluating the influence on the dependent variables polydispersity index (PI), mean particle size (z-AVE), zeta potential (ZP) and encapsulation efficiency (EE) using the ANOVA test. In addition, thermodynamic stability, polymorphism and matrix crystallinity were checked by Differential Scanning Calorimetry (DSC) and Wide Angle X-ray Diffraction (WAXD), whereas the toxicity of the SLNs was checked in HepG2 and Caco-2 cells. Results showed a mean particle size (z-AVE) between 294.6 nm and 627.0 nm, a PI in the range 0.425-0.750, a ZP of about -3 mV, and an EE between 38.39% and 81.20%. After tempering the bulk lipid (mimicking the end of the production process), the lipid showed amorphous characteristics, with a melting point of ca. 30 °C. The toxicity of the SLNs was evaluated in two distinct cell lines (HepG2 and Caco-2), proving to be dependent on the particle concentration in HepG2 cells, while no toxicity was observed in Caco-2 cells. SLNs were stable for 24 h in human serum albumin (HSA) solution in vitro. The resulting SLNs fabricated by double emulsion may provide a promising approach for the administration of protein therapeutics and antigens.
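The 2² factorial design with centre points mentioned above can be analysed with an ordinary ANOVA; the sketch below shows one way to do this in Python with statsmodels, using invented response values.

```python
# Hedged sketch of analysing a 2^2 factorial design (two factors, centre
# points) with ANOVA, as the abstract describes; all data are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Coded factor levels (-1/+1) for the two design factors, plus triplicate
# centre points (0, 0); response = encapsulation efficiency (EE, %).
df = pd.DataFrame({
    "surfactant": [-1, -1, 1, 1, 0, 0, 0],
    "insulin":    [-1, 1, -1, 1, 0, 0, 0],
    "EE":         [38.4, 55.0, 62.3, 81.2, 60.1, 59.4, 61.0],
})

model = ols("EE ~ surfactant * insulin", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```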
Abstract:
Context. The distribution of chemical abundances and their variation with time are important tools for understanding the chemical evolution of galaxies. In particular, the study of chemical evolution models can improve our understanding of the basic assumptions made when modelling our Galaxy and other spirals. Aims. We test a standard chemical evolution model for spiral disks in the Local Universe and study the influence of a threshold gas density and of different efficiencies in the star formation rate (SFR) law on the radial gradients of abundance, gas, and SFR. The model is then applied to specific galaxies. Methods. We adopt a one-infall chemical evolution model in which the Galactic disk forms inside-out by means of gas infall, and we test different thresholds and efficiencies in the SFR. The model is scaled to the disk properties of three Local Group galaxies (the Milky Way, M31 and M33) by varying the star formation efficiency and the timescale for the infall of gas onto the disk. Results. Using this simple model, we are able to reproduce most of the observed constraints available in the literature for the studied galaxies. The radial oxygen abundance gradients and their time evolution are studied in detail. The present-day abundance gradients are more sensitive to the threshold than to other parameters, while their temporal evolution depends more on the chosen SFR efficiency. A variable efficiency along the galaxy radius can reproduce the present-day gas distribution in the disks of spirals with prominent arms. The steepness of the stellar surface density distribution differs between massive and lower-mass disks, owing to their different star formation histories. Conclusions. The most massive disks seem to have evolved faster (i.e., with more efficient star formation) than the less massive ones, suggesting a downsizing in star formation for spirals. The threshold and the efficiency of star formation play a very important role in the chemical evolution of spiral disks. For instance, an efficiency varying with radius can be used to regulate the star formation. The oxygen abundance gradient can steepen or flatten in time depending on the choice of this parameter.
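The model ingredients being varied, a star formation threshold and a (possibly radius-dependent) efficiency, can be written compactly. The sketch below assumes a Kennicutt-like SFR law; all parameter values are arbitrary examples, not the paper's calibration.

```python
# Illustrative sketch of the ingredients the abstract varies: a
# Kennicutt-like SFR law with a gas-density threshold and an efficiency
# that may vary with radius. All parameter values are arbitrary examples.
import numpy as np

K_EXPONENT = 1.5          # Kennicutt-Schmidt slope (typical literature value)

def sfr_surface_density(sigma_gas, radius_kpc, sigma_threshold=7.0,
                        nu0=1.0, radial_scale=None):
    """SFR surface density: zero below the gas threshold; efficiency nu
    may decline with radius if radial_scale is given (one modelling choice)."""
    nu = nu0 if radial_scale is None else nu0 * np.exp(-radius_kpc / radial_scale)
    return np.where(sigma_gas > sigma_threshold,
                    nu * sigma_gas**K_EXPONENT, 0.0)

r = np.linspace(2, 18, 5)                       # galactocentric radii [kpc]
sigma = 30.0 * np.exp(-r / 7.0)                 # toy exponential gas profile
print(sfr_surface_density(sigma, r))                      # constant efficiency
print(sfr_surface_density(sigma, r, radial_scale=10.0))   # radius-dependent
```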
Abstract:
Background: Polymorphisms of the mannose-binding lectin gene (MBL2) affect the concentration and functional efficiency of the protein. We recently used haplotype-specific sequencing to identify 23 MBL2 haplotypes associated with enhanced susceptibility to several diseases. Results: In this work, we applied the same method to 288 and 470 chromosomes from Gabonese and European adults, respectively, and found three new haplotypes in the latter group. We propose a phylogenetic nomenclature to standardize MBL2 studies, and we found two major phylogenetic branches due to six strongly linked polymorphisms associated with high MBL production. They presented high Fst values and were embedded in regions with high nucleotide diversity and significant Tajima's D values. Compared to studies using small sample sizes and unphased genotypic data, we found differences in haplotyping, frequency estimation, Fu and Li's D* and Fst results. Conclusion: Using extensive testing for selective neutrality, we confirmed that stochastic evolutionary factors have had a major role in shaping this polymorphic gene worldwide.
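Among the neutrality tests mentioned, Tajima's D has a closed-form expression (Tajima 1989); a direct transcription is sketched below with invented inputs.

```python
# Hedged sketch of the Tajima's D statistic used in neutrality testing,
# transcribed from the standard formula (Tajima 1989); inputs are invented.
def tajimas_d(n, S, pi):
    """n: sampled chromosomes; S: segregating sites; pi: mean pairwise diversity."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / (e1 * S + e2 * S * (S - 1)) ** 0.5

# Positive D suggests balancing selection or population contraction;
# values near 0 are consistent with neutral evolution.
print(round(tajimas_d(n=470, S=23, pi=5.2), 3))
```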
Abstract:
This work aimed to study possible alterations in the production and accumulation of vegetative phytomass and in the nitrogen use efficiency of the maize crop under different doses of N fertilizer, using the ¹⁵N isotopic dilution technique. A randomized complete block design was adopted, with 5 treatments and 4 replicates. The treatments consisted of top-dressed doses of 0, 50, 100, 150 and 200 kg ha⁻¹ of N, applied as urea. Treatments were compared for crop productivity, nitrogen accumulation in the plant, and crop use of the urea-¹⁵N nitrogen. Increasing the N-fertilizer dose increased the dry matter mass, the dry matter yield rate, the productivity, and the accumulation of N in the maize plants.
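The ¹⁵N isotope-dilution technique rests on simple bookkeeping: the fraction of plant N derived from the fertilizer (Ndff) follows from the atom% ¹⁵N excess of plant and fertilizer. A hedged sketch with invented measurements:

```python
# Hedged sketch of the standard 15N isotope-dilution bookkeeping behind
# such experiments; the atom% values below are invented examples.
ATOM_PCT_NATURAL = 0.3663            # natural 15N abundance (atom%)

def n_from_fertilizer(plant_atom_pct, fert_atom_pct, plant_n_kg_ha):
    """Fraction and amount of plant N derived from the labelled urea."""
    ndff = ((plant_atom_pct - ATOM_PCT_NATURAL) /
            (fert_atom_pct - ATOM_PCT_NATURAL))   # N derived from fertilizer
    return ndff, ndff * plant_n_kg_ha

ndff, n_fert = n_from_fertilizer(plant_atom_pct=1.05, fert_atom_pct=2.5,
                                 plant_n_kg_ha=180.0)
applied = 100.0                      # kg ha^-1 of N in this treatment
print(f"Ndff = {ndff:.1%}, fertilizer N recovered = {n_fert:.1f} kg/ha, "
      f"N use efficiency = {n_fert / applied:.1%}")
```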
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as a high energy requirement. Microbiological production of xylitol has therefore been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low volumetric oxygen transfer coefficient (KLa) values are used to maintain microaerobic conditions. In the present study, however, the use of relatively high KLa values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 ± 0.21 U mg protein⁻¹) was achieved in the experiment with the lowest KLa value (12 h⁻¹), while the highest XD specific activity (0.19 ± 0.03 U mg protein⁻¹) was observed with a KLa value of 25 h⁻¹. Xylitol production was enhanced when KLa was increased from 12 to 50 h⁻¹, which gave the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 ± 0.08 g L⁻¹ h⁻¹ and an efficiency of 71 ± 6.0%. CONCLUSION: The results showed that the enzyme activities during xylitol bioproduction depend greatly on the initial KLa value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction. © 2008 Society of Chemical Industry
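The productivity and efficiency figures quoted above follow from simple definitions; the sketch below reproduces numbers of that order, taking the theoretical xylitol yield on xylose (≈0.917 g/g) as a literature assumption.

```python
# Hedged sketch of the fermentation performance metrics quoted in the
# abstract; concentrations and times are illustrative, and the theoretical
# maximum xylitol yield on xylose (~0.917 g/g) is a literature assumption.
Y_THEORETICAL = 0.917                 # g xylitol per g xylose (assumption)

def performance(xylitol_g_l, xylose_consumed_g_l, hours):
    productivity = xylitol_g_l / hours                   # g L^-1 h^-1
    yield_obs = xylitol_g_l / xylose_consumed_g_l        # observed yield, g/g
    efficiency = yield_obs / Y_THEORETICAL               # fraction of maximum
    return productivity, yield_obs, efficiency

qp, y, eff = performance(xylitol_g_l=45.0, xylose_consumed_g_l=69.0, hours=30.0)
print(f"Qp = {qp:.2f} g/(L*h), Y = {y:.2f} g/g, efficiency = {eff:.0%}")
```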
Abstract:
Nowadays there are several ways of supplying hot water for showers in residential buildings. One of them is the use of electric storage water heaters (boilers). This equipment raises the water temperature in a reservoir (tank) using the heat generated by an electric resistance. The behavior of this equipment in Brazil is still a research topic, and there is no national standard regulating its efficiency. In this context, an experimental program was conducted to collect power consumption data and evaluate boiler performance. The boilers underwent an operating cycle simulating a usage condition in order to collect the parameters for calculating efficiency. This one-day cycle was composed of the following phases: hot water withdrawal, reheating, and standby heat loss. The methods allowed the identification of different parameters of boiler operation, such as standby heat loss over 24 h, hot water withdrawal rate, reheating time, and energy efficiency. The average energy efficiency obtained was 75%; the lowest was 62% (boiler 2) and the highest 85% (boiler 9). © 2008 Elsevier B.V. All rights reserved.
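The efficiency reported here is, in essence, the useful heat delivered in the withdrawn hot water divided by the electric energy consumed over the cycle. A minimal sketch with invented measurements:

```python
# Hedged sketch of the energy-efficiency calculation implied by the test
# cycle (withdrawal, reheating, standby); all measurements are invented.
C_WATER = 4.186e-3                      # MJ/(kg*K), specific heat of water

def boiler_efficiency(mass_kg, t_hot_c, t_cold_c, electric_mj):
    """Useful heat delivered in withdrawn hot water over electric input,
    the input including reheating energy and 24 h standby losses."""
    useful = mass_kg * C_WATER * (t_hot_c - t_cold_c)
    return useful / electric_mj

eff = boiler_efficiency(mass_kg=150.0, t_hot_c=60.0, t_cold_c=22.0,
                        electric_mj=31.8)
print(f"energy efficiency = {eff:.0%}")   # ~75% with these example figures
```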