909 results for component environments
Abstract:
Carpooling originated in America in the 1970s as a response to the oil crisis, and in recent years it has grown significantly across the world. Some countries have created High Occupancy Vehicle (HOV) lanes to encourage commuters not to travel alone. In addition, carpool websites have been developed to connect commuters, making it possible to create compatible matches faster and more efficiently. This project focuses on carpooling in an academic environment, since younger people are more likely to carpool. Initially, extensive research was conducted into carpool studies from around the world, followed by a review of higher education institutions that use carpooling as a transportation mode. Most websites create carpools by targeting people from a specific country; these commuters have different origins and destinations, which makes compatible matches harder to find. The objective of this project is to develop a system that helps teachers and students in an academic environment create carpool matches. This setting makes matching easier because the students and teachers share the same destination. During the research, it was essential to examine as many existing carpool websites as possible from around the world. After this analysis, several sketches were made to define the layout and structure of the web application implemented throughout the project. Once the layout was established, development of the web application began. The project had its ups and downs, but it met all the necessary requirements and can be accessed at http://ipcacarpool.somee.com. Once the website was up and running, a web-based survey, built with a tool called Survey Planet, was conducted to study the reasons that motivate people to consider carpooling as an alternative to driving alone. The survey gathered 408 respondents, of whom 391 are students and 17 are teachers. The study concludes that a majority of the respondents do not carpool but would consider carpooling if a dedicated parking space were available. A majority of the respondents who do carpool started less than a year ago, indicating that this means of transportation is a recent adoption.
Abstract:
The power of ideas is known to be tremendous. Yet in many companies there are employees who have good ideas but do not put them into practice, while many others have good ideas and are encouraged to contribute them to innovation in the company. This study attempts to identify factors that contribute to successful idea management and the resulting business innovation. The method used was a case study applied to two companies. During the investigation, factors considered essential to the success of an idea management program were identified, among which we highlight: demonstrating results; involvement of top management; establishment of goals and objectives; recognition; and dissemination of good results. Companies with such systems in place capture the best ideas from their collaborators and apply them internally. This study intends to contribute to business innovation in enterprises through idea creation and management, mainly by collecting the best ideas of their own employees. The results can be used to help improve already-deployed suggestion systems, as well as to assist all managers who wish to implement suggestion or idea management systems.
Abstract:
A biomonitoring study using transplanted lichens (Flavoparmelia caperata) was conducted to assess indoor air quality in primary schools at urban (Lisbon) and rural (Ponte de Sor) Portuguese sites. The lichen exposure period ran from April to June 2010, and two environments in each school were studied: classrooms and the outdoor courtyard. Afterwards, the lichen samples were processed and analyzed by instrumental neutron activation analysis (INAA) to quantify a total of 20 chemical elements. The elements accumulated in the exposed lichens were assessed and enrichment factors (EF) were determined. Indoor and outdoor biomonitoring results were compared to evaluate how biomonitors such as lichens respond in indoor environments and to identify the types of pollutants prevalent there.
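The enrichment factors mentioned above are commonly computed by double normalization against a conservative reference element and a background (unexposed) sample. The formulation and the numbers below are illustrative assumptions, not values from this study:

```python
def enrichment_factor(c_elem: float, c_ref: float,
                      c_elem_bg: float, c_ref_bg: float) -> float:
    """EF of an element relative to a conservative reference element.

    EF = (C_elem / C_ref)_exposed / (C_elem / C_ref)_background.
    EF values well above 1 point to an anthropogenic (non-crustal) source.
    """
    return (c_elem / c_ref) / (c_elem_bg / c_ref_bg)

# Hypothetical concentrations (mg/kg) in exposed vs. unexposed lichen,
# normalized to a hypothetical reference element such as Sc:
print(enrichment_factor(c_elem=12.0, c_ref=0.30, c_elem_bg=2.0, c_ref_bg=0.25))
# -> 5.0: the element is five-fold enriched over the baseline ratio.
```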
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, more and more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with grid and cloud computing). Significant examples include chemistry, computational biology, and bioinformatics, with many Java-based APIs available (e.g., Neobio). Often, when the execution of such an application is terminated abruptly by a failure (whether caused by a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost; when the application is later re-initiated, it must restart from scratch, wasting resources and time, while remaining prone to another failure, and its completion may be delayed with no deadline guarantees. Our proposed solution to these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, able to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes Research Virtual Machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
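The mechanism described above lives inside the JVM and is transparent to the programmer. Purely as a language-level analogy, and not the authors' implementation, a minimal sketch of periodic checkpoint-and-resume for a long-running loop could look as follows; the file name, state layout, and snapshot interval are all arbitrary choices for this example:

```python
import os
import pickle

CHECKPOINT = "state.pkl"  # hypothetical checkpoint file name

def save_checkpoint(state: dict) -> None:
    # Write to a temp file first, then rename atomically, so a crash
    # mid-write cannot corrupt the last good checkpoint.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint() -> dict:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "partial": 0}  # no checkpoint yet: start from scratch

state = load_checkpoint()
for i in range(state["i"], 1_000_000):
    state["partial"] += i       # stand-in for the long-running computation
    state["i"] = i + 1          # next iteration to execute after a restart
    if i % 100_000 == 0:
        save_checkpoint(state)  # periodic snapshot; a re-run resumes here
save_checkpoint(state)          # persist the final state
```

On restart after a crash, `load_checkpoint` returns the last snapshot and the loop resumes from `state["i"]` instead of from zero, which is the essential saving the abstract describes.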
Abstract:
The aim of this study was to contribute to the assessment of exposure levels to ultrafine particles from automobile traffic in the urban environment of Lisbon, Portugal, by monitoring lung-deposited alveolar surface area (resulting from exposure to ultrafine particles) along a major avenue leading to the town center during late spring, as well as inside buildings facing it. Data revealed differentiated patterns for weekdays and weekends, consistent with the PM2.5 and PM10 patterns currently monitored by air quality stations in Lisbon. The observed ultrafine particle levels may be directly correlated with fluxes in automobile traffic. During a typical week, alveolar deposited surface area varied between 35 and 89.2 μm²/cm³, comparable with levels reported for other towns in Germany and the United States. The measured values also allowed determination of the number of ultrafine particles per cubic centimeter, which is comparable to levels reported for Madrid and Brisbane. Regarding outdoor/indoor levels, we observed 32 to 63% higher levels outdoors, which is somewhat lower than the differences observed in houses in Ontario.
Abstract:
This chapter addresses the resolution of scheduling in manufacturing systems subject to perturbations. The planning of manufacturing systems frequently involves solving a huge number and variety of combinatorial optimisation problems with an important impact on the performance of manufacturing organisations. Examples include sequencing and scheduling problems in manufacturing management, routing and transportation, layout design, and timetabling.
Abstract:
This paper proposes an architecture for developing systems that interact with Ambient Intelligence (AmI) environments. The architecture follows from a methodology for including Artificial Intelligence in AmI environments (ISyRAmI - Intelligent Systems Research for Ambient Intelligence). The ISyRAmI architecture comprises several modules. The first is related to the acquisition of data, information, and even knowledge concerning the AmI environment, which can be acquired in different ways (from raw sensors, from the web, from experts). The second module is related to the storage, conversion, and handling of this data/information/knowledge, with the understanding that incorrectness, incompleteness, and uncertainty may be present. The third module is related to intelligent operation on the data/information/knowledge of the AmI environment, including knowledge discovery systems, expert systems, planning, multi-agent systems, simulation, optimization, etc. The last module is related to actuation in the AmI environment by means of automation, robots, intelligent agents, and users.
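As a rough illustration of how the four modules chain together, consider the following minimal pipeline sketch; every class and method name here is invented for this example and does not come from the paper:

```python
from typing import List

class Acquisition:
    def read(self) -> dict:
        # Module 1: acquire data/information/knowledge (sensors, web, experts).
        return {"temp": 18.5, "presence": True}

class Storage:
    def __init__(self) -> None:
        self.history: List[dict] = []

    def store(self, obs: dict) -> dict:
        # Module 2: store/convert/handle data, which may be incomplete or uncertain.
        self.history.append(obs)
        return obs

class Reasoning:
    def decide(self, obs: dict) -> str:
        # Module 3: intelligent operation (expert system, planner, agents, ...).
        return "heat_on" if obs["presence"] and obs["temp"] < 20 else "idle"

class Actuation:
    def act(self, action: str) -> None:
        # Module 4: actuate in the AmI environment (automation, robots, agents).
        print(f"actuator -> {action}")

obs = Acquisition().read()
Actuation().act(Reasoning().decide(Storage().store(obs)))
```

A real AmI deployment would replace each stub with sensor drivers, a knowledge base, a reasoning engine, and actuator interfaces.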
Abstract:
II European Conference on Curriculum Studies, "Curriculum studies: Policies, perspectives and practices". Porto, FPCEUP, October 16th-17th.
Abstract:
Doctoral thesis, Marine Sciences, specialty in Marine Biology, 18 December 2015, Universidade dos Açores.
Abstract:
Nanotechnology is an important emerging industry, with a projected annual market of around one trillion dollars by 2015. It involves the control of atoms and molecules to create new materials with a variety of useful functions. Although these nano-scale materials offer advantages, questions about their impact on the environment and human health must also be addressed, so that potential risks can be limited at early stages of development. At this time, the occupational health risks associated with manufacturing and using nanoparticles are not yet clearly understood. However, workers may be exposed to nanoparticles through inhalation at levels that can greatly exceed ambient concentrations. Current workplace exposure limits are based on particle mass, but this criterion may not be adequate here, since nanoparticles are characterized by a very large surface area, which has been pointed out as the distinctive characteristic that could even turn an inert substance into one exhibiting very different interactions with biological fluids and cells. Therefore, assessing human exposure based on the mass concentration of particles, an approach widely adopted for particles over 1 μm, would not seem to work in this particular case. In fact, nanoparticles have far more surface area than the equivalent mass of larger particles, which increases the chance that they may react with body tissues; it has therefore been claimed that surface area should be the metric used for nanoparticle exposure and dosing. As a result, assessing exposure based on the measurement of particle surface area is of increasing interest. It is well known that lung deposition is the most efficient way for airborne particles to enter the body and cause adverse health effects. If nanoparticles can deposit in the lung and remain there, have an active surface chemistry, and interact with the body, then there is potential for exposure. It has been shown that surface area plays an important role in the toxicity of nanoparticles and is the metric that best correlates with particle-induced adverse health effects; the potential for adverse effects seems to be directly proportional to particle surface area. The objective of the study is to identify and validate methods and tools for measuring nanoparticles during the production, manipulation, and use of nanomaterials.
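To make the claim that nanoparticles carry far more surface area for the equivalent mass concrete, here is a small worked example under the idealized assumption of monodisperse spheres of unit density (1000 kg/m³); the diameters and density are illustrative, not values from the study:

```python
def specific_surface_area(d_m: float, rho: float = 1000.0) -> float:
    """Surface area per unit mass (m^2/kg) for monodisperse spheres.

    A sphere has area pi*d^2 and mass rho*pi*d^3/6, so area/mass = 6/(rho*d).
    rho = 1000 kg/m^3 (unit density) is an illustrative assumption.
    """
    return 6.0 / (rho * d_m)

for d in (10e-6, 1e-6, 50e-9):  # 10 um, 1 um, 50 nm diameters
    print(f"d = {d * 1e9:8.0f} nm -> {specific_surface_area(d):10.1f} m^2/kg")
# 50 nm spheres expose 200x the surface area of 10 um spheres of equal mass,
# which is why a mass-based limit can badly understate nanoparticle dose.
```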
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA rests on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. For hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated: the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process, so the sources cannot be statistically independent, compromising the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio; in any case, some endmembers are always incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures, computing the mutual information by fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates by the likelihood of being correctly unmixed is also proposed.
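The dependence induced by the constant-sum constraint is easy to verify numerically. In the sketch below (an illustration, not the paper's experiment), abundance fractions are drawn from a symmetric Dirichlet distribution; because the fractions of each pixel must sum to one, every pair comes out negatively correlated, so they cannot be statistically independent:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3                                        # number of endmembers (illustrative)
S = rng.dirichlet(np.ones(p), size=50_000)   # abundance fractions; rows sum to 1

# The constant-sum constraint forces negative covariance between any two
# fractions; for a symmetric Dirichlet the off-diagonal correlations are
# -1/(p-1), i.e., -0.5 here, so independence is impossible.
print(np.corrcoef(S.T).round(3))
```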
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
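As a sketch of the generative model DECA assumes (the GEM inference itself is not reproduced here), one can simulate pixels whose abundance fractions follow a two-component Dirichlet mixture; the image sizes, mixture weights, Dirichlet parameters, and signature matrix below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
L, p, N = 50, 3, 5_000              # bands, endmembers, pixels (illustrative)
M = rng.uniform(0.1, 1.0, (L, p))   # stand-in endmember signatures, not real spectra

# Abundances drawn from a two-component mixture of Dirichlet densities --
# the family of priors DECA fits; weights and parameters are arbitrary here.
alphas = [np.array([9.0, 1.0, 1.0]), np.array([1.0, 1.0, 9.0])]
comp = rng.choice(2, size=N, p=[0.6, 0.4])
S = np.array([rng.dirichlet(alphas[c]) for c in comp])  # N x p; rows sum to 1

# Linear observation model: each pixel is M @ s plus sensor noise; the
# Dirichlet draw already guarantees non-negativity and constant sum.
Y = S @ M.T + 0.01 * rng.standard_normal((N, L))
```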
Abstract:
Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
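The vertex-hunting idea admits a compact sketch: repeatedly project the data onto a random direction orthogonal to the endmembers found so far and keep the most extreme pixel, which must be a vertex of the simplex. The simplified loop below captures only that step; the published algorithm also includes SNR-dependent dimensionality reduction and projective scaling, which are omitted here:

```python
import numpy as np

def vca_sketch(Y: np.ndarray, p: int, seed: int = 0):
    """Simplified vertex-selection loop in the spirit of VCA.

    Y : (L, N) data matrix, one pixel spectrum per column.
    p : number of endmembers to extract.
    Returns the (L, p) matrix of selected pixel spectra and their indices.
    """
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    E = np.zeros((L, p))
    idx = []
    for k in range(p):
        w = rng.standard_normal(L)          # random direction
        if k > 0:
            # Remove components along endmembers already found, so the
            # projection explores a direction orthogonal to them.
            Q, _ = np.linalg.qr(E[:, :k])
            w -= Q @ (Q.T @ w)
        j = int(np.argmax(np.abs(w @ Y)))   # most extreme pixel = a vertex
        idx.append(j)
        E[:, k] = Y[:, j]
    return E, idx
```

Because the affine transformation of a simplex is still a simplex, the extreme points of these one-dimensional projections coincide with simplex vertices, which is what lets the loop recover the endmembers.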