958 results for Utilization of resources


Relevance:

90.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Entrepreneurship and Internationalization, under the supervision of Maria Clara Dias Pinto Ribeiro

Relevance:

90.00%

Publisher:

Abstract:

The interest in zero-valent iron nanoparticles (nZVIs) has been increasing significantly since the development of a green production method in which extracts from natural products or wastes are used. However, this field of application is still poorly studied and lacks the knowledge needed to fully understand the production and application processes. The aim of the present work was to evaluate the viability of using several tree leaves to produce extracts capable of reducing iron(III) in aqueous solution to form nZVIs. The quality of the extracts was evaluated in terms of their antioxidant capacity. The results show that: i) dried leaves produce extracts with higher antioxidant capacities than non-dried leaves; ii) the most favorable extraction conditions (temperature, contact time, and volume:mass ratio) were identified for each leaf; iii) with the aim of developing a green but also low-cost method, water was chosen as the solvent; iv) the extracts can be classified into three categories according to their antioxidant capacity (expressed as Fe(II) concentration): >40 mmol L−1, 20–40 mmol L−1, and 2–10 mmol L−1, with oak, pomegranate, and green tea leaves producing the richest extracts; and v) TEM analysis proves that nZVIs (d = 10–20 nm) can be produced using the tree leaf extracts.
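The three-category classification above can be expressed as a small helper. The category names and the handling of the unreported 10–20 mmol L−1 gap are illustrative assumptions, not part of the study:

```python
def classify_extract(fe2_mmol_per_l: float) -> str:
    """Bucket a leaf extract by antioxidant capacity, expressed as the
    Fe(II) concentration (mmol/L) obtained in the reduction assay.
    Thresholds follow the three categories reported in the abstract."""
    if fe2_mmol_per_l > 40:
        return "rich"          # e.g. oak, pomegranate, green tea leaves
    if 20 <= fe2_mmol_per_l <= 40:
        return "intermediate"
    if 2 <= fe2_mmol_per_l <= 10:
        return "poor"
    return "unclassified"      # outside the ranges reported in the study

print(classify_extract(45.0))  # rich
```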

Relevance:

90.00%

Publisher:

Abstract:

Today, all kinds of innovation and research work are done by partnerships of competent entities, each contributing specialized skills. Along with the development of the global economy, global innovation partnerships have grown considerably and form the basis of most sophisticated innovations today. To further streamline and simplify such cooperation, several innovation networks have been formed, at both local and global levels. This paper discusses the different types of innovation and how cooperation can benefit innovation in terms of pooling resources and sharing risks. An open global co-innovation network promoted by Tata Consultancy Services, the TCS COIN, is taken as a case. It enables venture capitalists, consultants, research agencies, companies, and universities to form nodes of the network so that each entity can play a meaningful role in it. Further, two innovation projects implemented using the COIN are discussed. Innovation networks like these could form the basis of a unique global innovation network that is not owned by any company and is used by innovation partners globally to collaborate and conduct research and development.

Relevance:

90.00%

Publisher:

Abstract:

Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for each class of user and across different ranges of resource allocation: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (Public, Private, and Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing for more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved by incorporating an SLA-based redistribution of their VMs' computational power.
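As a rough illustration of the partial-utility idea, the sketch below defines a range-based utility function and a revenue model proportional to the utility delivered. The client classes and their ranges are invented for the example and are not taken from the paper:

```python
def partial_utility(alloc_fraction: float, ranges) -> float:
    """Range-based partial utility: the client declares, per allocation
    range, the fraction of full utility still obtained at that level.
    `ranges` is a list of (lower_bound, utility) pairs, sorted by
    descending lower bound."""
    for lower, utility in ranges:
        if alloc_fraction >= lower:
            return utility
    return 0.0

# Hypothetical client classes: a strict class tolerates little degradation,
# a flexible class keeps most of its utility even with half the resources.
strict   = [(1.0, 1.0), (0.9, 0.5), (0.7, 0.1)]
flexible = [(1.0, 1.0), (0.7, 0.9), (0.4, 0.6)]

def revenue(base_price: float, alloc_fraction: float, ranges) -> float:
    """Price paid scales with the partial utility actually delivered."""
    return base_price * partial_utility(alloc_fraction, ranges)

# At 80% allocation, taking resources from the flexible client is far
# cheaper for the provider than taking them from the strict one.
print(revenue(100.0, 0.8, strict))    # 10.0
print(revenue(100.0, 0.8, flexible))  # 90.0
```

This is what lets an overcommitted provider redistribute capacity economically: it degrades the VMs whose declared partial utility drops the least.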

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE To assess inequalities in access, utilization, and quality of health care services according to socioeconomic status. METHODS This population-based cross-sectional study evaluated 2,927 individuals aged ≥ 20 years living in Pelotas, RS, Southern Brazil, in 2012. The associations between socioeconomic indicators and the following outcomes were evaluated: lack of access to health services, utilization of services, waiting period (in days) for assistance, and waiting time (in hours) in lines. We used Poisson regression for the crude and adjusted analyses. RESULTS The lack of access to health services was reported by 6.5% of the individuals who sought health care. The prevalence of use of health care services in the 30 days prior to the interview was 29.3%. Of these, 26.4% waited five days or more to receive care and 32.1% waited at least an hour in line. Approximately 50.0% of the health care services were funded through the Unified Health System. The use of health care services was similar across socioeconomic groups. The lack of access to health care services and the waiting time in lines were higher among individuals of lower economic status, even after adjusting for health care needs. The waiting period to receive care was longer among those with higher socioeconomic status. CONCLUSIONS Although no differences were observed in the use of health care services across socioeconomic groups, inequalities were evident in the access to and quality of these services.
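The crude analyses above compare outcome prevalences across socioeconomic groups. A minimal sketch of the quantity being estimated, a prevalence ratio with a log-scale Wald interval, is shown below; the counts are hypothetical, and the paper's actual analysis uses Poisson regression rather than this closed-form crude estimate:

```python
import math

def prevalence_ratio(k1: int, n1: int, k0: int, n0: int, z: float = 1.96):
    """Crude prevalence ratio of an exposed group (k1/n1) versus a
    reference group (k0/n0), with a 95% Wald CI on the log scale."""
    p1, p0 = k1 / n1, k0 / n0
    pr = p1 / p0
    # Standard error of ln(PR) for independent binomial counts.
    se = math.sqrt((1 - p1) / k1 + (1 - p0) / k0)
    half = z * se
    return pr, pr * math.exp(-half), pr * math.exp(half)

# Hypothetical counts: lack of access in the poorest vs the richest group.
pr, lo, hi = prevalence_ratio(60, 600, 30, 600)
print(f"PR = {pr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```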

Relevance:

90.00%

Publisher:

Abstract:

ABSTRACT OBJECTIVE: To analyze whether demographic and socioeconomic variables, as well as percutaneous coronary intervention, are associated with the use of medicines for secondary prevention of acute coronary syndrome. METHODS: In this cohort study, we included 138 patients with acute coronary syndrome, aged 30 years or more and of both sexes. The data were collected at hospital discharge and after six and twelve months. The outcome of the study was the simultaneous use of the medicines recommended for secondary prevention of acute coronary syndrome: platelet antiaggregants, beta-blockers, statins, and angiotensin-converting-enzyme inhibitors or angiotensin receptor blockers. The independent variables were: sex, age, education in years of schooling, monthly income in tertiles, and percutaneous coronary intervention. We described the prevalence of use of each group of medicines with its 95% confidence interval, as well as the simultaneous use of the four medicines, in all analyzed periods. In the crude analysis, we tested the outcome against the independent variables for each period through the chi-square test. The adjusted analysis was carried out using Poisson regression. RESULTS: More than a third of patients (36.2%; 95%CI 28.2;44.3) had the four medicines prescribed simultaneously at discharge. We did not observe any differences in the prevalence of use between the two follow-up periods. The most prescribed class of medicines at discharge was platelet antiaggregants (91.3%). In the crude analysis, the demographic and socioeconomic variables were not associated with the outcome in any of the three periods. CONCLUSIONS: The prevalence of simultaneous use of medicines at discharge and at follow-up points to the under-utilization of this therapy in clinical practice. Intervention strategies are needed to improve the quality of care given to patients beyond hospital discharge, a critical point of transition in care.
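The reported prevalence of simultaneous use (36.2%; 95%CI 28.2;44.3) can be reproduced with a normal-approximation interval. The count of 50 out of 138 patients is our inference from the rounded percentage, not a figure stated in the abstract:

```python
import math

def prevalence_ci(k: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a prevalence k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# 50 of 138 patients with all four medicine classes prescribed at discharge
# (50/138 ≈ 36.2%, matching the reported figure).
p, lo, hi = prevalence_ci(50, 138)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 36.2% (95% CI 28.2% to 44.3%)
```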

Relevance:

90.00%

Publisher:

Abstract:

Dissertation presented in partial fulfillment of the requirements for the degree of Master in Biotechnology

Relevance:

90.00%

Publisher:

Abstract:

ABSTRACT OBJECTIVE To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS Data envelopment analysis, a linear programming technique, creates a best-practice frontier by comparing observed production with the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model: medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model: the financing budget) to analyze frontier shifts in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC) for 2010-2013. RESULTS The mean scores for health care, teaching, and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best-performing year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits, 34.0% in admissions, 12.0% in undergraduate students, 13.0% in multi-professional residents, 48.0% in graduate students, and 7.0% in research projects, besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in the financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care, and no variation in research. CONCLUSIONS The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best-practice frontier.
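An output-oriented, variable-returns-to-scale DEA score of the kind described can be computed with a small linear program. The sketch below is plain DEA, without the network or dynamic extensions used in the paper, and assumes SciPy's `linprog` plus made-up data for two hospitals:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_oriented_vrs(X, Y, o):
    """Output-oriented VRS DEA score for DMU `o`.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns phi >= 1,
    the factor by which DMU o could expand all outputs; phi == 1 means
    the unit already lies on the best-practice frontier."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [phi, lambda_1..lambda_n]; maximize phi.
    c = np.zeros(n + 1)
    c[0] = -1.0                             # linprog minimizes, so negate
    A_ub, b_ub = [], []
    for i in range(m):                      # inputs: sum(lambda*x) <= x_o
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                      # outputs: phi*y_o <= sum(lambda*y)
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]   # VRS convexity: sum(lambda)=1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two hospitals with equal inputs; B produces twice A's output, so A could
# double its output (phi = 2) while B is frontier-efficient (phi = 1).
X = np.array([[2.0], [2.0]])
Y = np.array([[2.0], [4.0]])
print(dea_output_oriented_vrs(X, Y, 0))  # ≈ 2.0
print(dea_output_oriented_vrs(X, Y, 1))  # ≈ 1.0
```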

Relevance:

90.00%

Publisher:

Abstract:

Master's thesis in Informatics Engineering

Relevance:

90.00%

Publisher:

Abstract:

The teaching-learning process is increasingly focused on the combination of the paradigms "learning by viewing" and "learning by doing." In this context, educational resources, whether expository or evaluative, play a pivotal role. Both types of resources are interdependent, and sequencing them would create a richer educational experience for the end user. However, there is a lack of tools that support sequencing, essentially because existing specifications are complex. Seqins is a sequencing tool for digital resources with a fairly simple sequencing model. The tool communicates through the IMS LTI specification with a plethora of e-learning systems, such as learning management systems, repositories, and authoring and evaluation systems. In order to validate Seqins, we integrated it into an instance of the Ensemble e-learning framework for computer programming learning.
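A sequencing model over interdependent expository and evaluative resources can be sketched as a topological ordering over prerequisite links. The data model below is hypothetical, not the actual Seqins model or the IMS LTI wire format:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    rid: str
    kind: str                  # "expository" or "evaluative"
    requires: list = field(default_factory=list)   # rids of prerequisites

def sequence(resources):
    """Emit resource ids in an order where every prerequisite precedes
    its dependants (a depth-first topological sort over `requires`).
    Assumes the prerequisite graph is acyclic."""
    by_id = {r.rid: r for r in resources}
    done, order = set(), []
    def visit(r):
        if r.rid in done:
            return
        for dep in r.requires:
            visit(by_id[dep])
        done.add(r.rid)
        order.append(r.rid)
    for r in resources:
        visit(r)
    return order

course = [
    Resource("loops-lesson", "expository"),
    Resource("loops-quiz", "evaluative", ["loops-lesson"]),   # evaluates the lesson
    Resource("arrays-lesson", "expository", ["loops-lesson"]),
]
print(sequence(course))  # ['loops-lesson', 'loops-quiz', 'arrays-lesson']
```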

Relevance:

90.00%

Publisher:

Abstract:

Nucleic Acids Research (2007), Vol. 37, No. 14, 4755-4766

Relevance:

90.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology, and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop joined researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches, and results obtained in the development of ongoing projects. In the first case, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently tested. In the second paper, Fumiko Kano presents a work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis presented by the author is to verify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures.
The purpose of the comparison is to verify the similarity measures against these objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources, and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues, the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization, and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt to subject librarians, employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
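As one concrete instance of the feature-based similarity measures discussed in Kano's comparison, the sketch below implements Tversky's classic ratio model (not necessarily one of the four measures actually compared); the feature sets for the two cultural contexts are invented:

```python
def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Tversky's ratio model of feature-based similarity: common features
    weighed against the distinctive features of each concept. With
    alpha = beta = 0.5 this reduces to the Dice coefficient."""
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical feature sets for the concept "primary school" as defined
# in two different national education systems.
school_a = {"compulsory", "state-funded", "ages-6-15"}
school_b = {"compulsory", "state-funded", "ages-5-16", "uniformed"}
print(tversky(school_a, school_b))  # ≈ 0.571
```

The asymmetry parameters alpha and beta let the measure favor one concept's distinctive features over the other's, which is the property that makes such models interesting for mapping culturally influenced ontologies.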

Relevance:

90.00%

Publisher:

Abstract:

The massification of electric vehicles (EVs) can have a significant impact on the power system, requiring a new approach to energy resource management. Energy resource management aims to obtain the optimal scheduling of the available resources, considering distributed generators, storage units, demand response, and EVs. The large number of resources makes energy resource management more complex: reaching the optimal solution can take several hours, while a quick solution is required for the next day. Therefore, it is necessary to use adequate optimization techniques to determine the best solution in a reasonable amount of time. This paper presents a hybrid artificial intelligence technique to solve a complex energy resource management problem with a large number of resources, including EVs, connected to the electric network. The hybrid approach combines simulated annealing (SA) and ant colony optimization (ACO) techniques. The case study concerns different EV penetration levels. Comparisons with a previous SA approach and a deterministic technique are also presented. For the 2,000-EV scenario, the proposed hybrid approach found a better solution than the previous SA version, resulting in a cost reduction of 1.94%. For this scenario, the proposed approach is approximately 94 times faster than the deterministic approach.
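The SA half of the hybrid can be sketched generically. The toy unit-commitment instance below (capacities, unit costs, demand, shortfall penalty) is entirely made up and only illustrates the accept-worse-with-probability exp(-Δ/T) mechanic, not the paper's actual problem formulation or its ACO component:

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=2000):
    """Plain simulated annealing: always accept improving neighbours,
    accept worsening ones with probability exp(-delta/T), and cool T
    geometrically. Returns the best solution seen and its cost."""
    x, t = x0, t0
    best, best_c = x0, cost(x0)
    for _ in range(steps):
        y = neighbour(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < best_c:
            best, best_c = x, cost(x)
        t *= cooling
    return best, best_c

random.seed(1)
cap  = [10, 15, 8, 12, 20, 5, 9, 11]               # generator capacities (kW)
unit = [3.0, 2.5, 4.0, 2.8, 2.2, 5.0, 3.5, 3.0]    # cost per kW committed

def cost(sched):
    """Commitment cost plus a heavy penalty for unmet 50 kW demand."""
    served = sum(c for c, on in zip(cap, sched) if on)
    shortfall = max(0, 50 - served)
    return sum(c * u for c, u, on in zip(cap, unit, sched) if on) + 100 * shortfall

def neighbour(s):
    """Flip one random generator on/off."""
    i = random.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

best, best_cost = simulated_annealing(cost, neighbour, (0,) * 8)
print(best, best_cost)
```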

Relevance:

90.00%

Publisher:

Abstract:

This work describes the utilization of pulsed electric fields (PEF) to control protozoan contamination of a microalgae culture in an industrial 2.7 m³ microalgae photobioreactor. The contaminated culture was treated with PEF for 6 h, with an average field of 900 V/cm and 65 μs pulses at 50 Hz. Working with recirculation, the entire culture was uniformly exposed to the PEF throughout the assay. The development of the microalgae and protozoan populations was followed, and the results showed that PEF is effective in the selective elimination of protozoa from microalgae cultures, causing growth halt, death, or cell rupture of the protozoa without affecting microalgae productivity. Specifically, the results show a reduction of the active protozoan population of 87% after the 6 h treatment and 100% after a few days of the normal cultivation regime. At the same time, the microalgae growth rate remained unaffected. © 2014 Elsevier B.V.

Relevance:

90.00%

Publisher:

Abstract:

Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified scheduling theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine its practical potential, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
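For contrast with the semi-partitioned schemes analyzed here, a plain partitioned first-fit assignment guided by a utilization-based schedulability test can be sketched as follows. It also shows the kind of task set that motivates task splitting in the first place:

```python
def first_fit_partition(utilizations, n_procs):
    """Baseline partitioned assignment: place each task on the first
    processor where total utilization stays <= 1 (the exact EDF test for
    independent implicit-deadline tasks). Returns a task->processor list,
    or None if pure partitioning fails. Semi-partitioned schemes such as
    S-EKG and NPS-F improve on this by splitting the tasks it rejects."""
    bins = [0.0] * n_procs
    assignment = []
    for u in utilizations:
        for p in range(n_procs):
            if bins[p] + u <= 1.0:
                bins[p] += u
                assignment.append(p)
                break
        else:
            # No processor can host the task whole: a splitting candidate.
            return None
    return assignment

# Three tasks of utilization 0.6 cannot be partitioned onto two processors,
# even though total utilization (1.8) is below the platform capacity (2.0).
print(first_fit_partition([0.6, 0.6, 0.6], 2))  # None
print(first_fit_partition([0.6, 0.6, 0.6], 3))  # [0, 1, 2]
```

This bin-packing loss, capped at a 50% worst-case utilization bound for partitioned schemes, is exactly what slot-based task-splitting algorithms recover by letting one task's budget span two processors.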