938 results for methods of resolution enhancement
Abstract:
The purpose of this study is to analyze the existing literature on hospitality management across all research papers published in The International Journal of Hospitality Management (IJHM) between 2008 and 2014. The authors apply bibliometric methods – in particular, author citation and co-citation analyses (ACA) – to identify the main research lines within this scientific field; in other words, its ‘intellectual structure’. Social network analysis (SNA) is also used to visualize this structure. The results of the analysis allow us to define the different research lines, or fronts, that shape the intellectual structure of research on hospitality management.
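As a rough illustration of the ACA/SNA workflow described above, the sketch below builds a weighted author co-citation network from toy reference lists and extracts communities as candidate research fronts. The data, and the choice of networkx with greedy modularity communities, are illustrative assumptions rather than the paper's actual pipeline.

```python
# Minimal sketch of author co-citation analysis (ACA) plus a network step.
# The reference lists are hypothetical; the paper works from IJHM records.
from itertools import combinations
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical data: each entry is the set of authors cited by one paper.
reference_lists = [
    {"Smith", "Jones", "Lee"},
    {"Smith", "Jones"},
    {"Lee", "Kim", "Smith"},
]

# Two authors are co-cited when they appear in the same reference list.
cocitations = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(refs), 2):
        cocitations[(a, b)] += 1

# Build the co-citation network; edge weight = co-citation frequency.
G = nx.Graph()
for (a, b), w in cocitations.items():
    G.add_edge(a, b, weight=w)

# Communities in this graph approximate the 'research fronts'.
for i, c in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"Research front {i}: {sorted(c)}")
```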
Abstract:
The South Carolina Department of Transportation routinely retains Professional Consulting Engineering firms to provide engineering design and related professional services for the preparation of construction plans or design-build Request for Proposal bid packages for a wide variety of Federal-aid Highway Program roadway and bridge construction projects throughout South Carolina. The purpose of this project is to examine the current process of determining a "Fair and Reasonable" fixed fee for professional service contracts and to evaluate possible alternative methods, including practices in other states, that may improve the process, particularly in light of the considerable variation in audited overhead rates among consulting firms. In reviewing such alternative methods, particular attention will be given to evaluating each method's potential to act as an incentive for consulting firms to effectively manage their overhead costs.
Abstract:
In Angola, construction with raw earth is a cultural heritage widely used by low-income households, which represent over 80% of the population [1, 3]. In Huila province, raw-earth construction is evident on a large scale, in urban as well as peri-urban and rural areas. The construction methods follow ancestral standards, distributed throughout the Huila region and practiced by its several ethnic groups. Among the earth construction techniques, the most prominent are adobe, wattle-and-daub and, more recently, CEB (Compressed Earth Block). The type of soil used to make the adobes is mainly silty-clayey sand [1]. The most commonly applied materials are rods, reeds, wood, grass, straw, soil and stone, almost all with the same characteristics [2]. The manufacture of adobe consists essentially of mixing clay and grass (plant fibers), placing the mixture inside a wooden mold 42 cm long and 18 cm high, and letting it dry for three to four days before use in housing construction. The use of these materials makes construction less expensive because they are collected, transformed and applied by the homeowner himself, without any formal design, based only on practice and on experience acquired from his ancestors. These are simple constructions, presenting a typology of grouped and isolated single-family housing with 2 to 3 bedrooms [2]. The construction techniques used in such small dwellings have positive environmental aspects, both in the materials employed and in the manner in which the buildings are raised, showing special concern for improving their quality in terms of strength, durability and comfort [4].
Comparison of Explicit and Implicit Methods of Cross-Cultural Learning in an International Classroom
Abstract:
The paper addresses a gap in the literature concerning the difference between enhanced and non-enhanced cross-cultural learning in an international classroom. The objective of the described research was to clarify whether the environment of an international classroom alone can enhance cross-cultural competences significantly, or whether an additional focus on cross-cultural learning as an explicit objective of learning activities adds substantially to the experience. The research question was defined as “how can a specific exercise focused on cross-cultural learning enhance the cross-cultural skills of university students in an international classroom?”. Surveys were conducted among international students in three leading Central European universities in Lithuania, Poland and Hungary to measure the increase in their cross-cultural competences. The Lithuanian and Polish classes were composed of international students and concentrated on International Management/Business topics (explicit method). The Hungarian survey was done in a general business class that just happened to be international in its composition (implicit method). Overall, our findings show that the implicit method was comparable in effectiveness to the explicit method, and in some respects even stronger. The study method included analysis of each student's individual increase in each study dimension and the construction of a compound measure to capture the overall results. Our findings confirm the power of the international classroom as a stimulating environment for latent cross-cultural learning even without specific exercises focused on cross-cultural learning itself. However, the specific exercise did induce additional learning, especially related to cross-cultural awareness and communication with representatives of other cultures, even though the extent of that learning may be interpreted as underwhelming. The main conclusion of the study is that the diversity of the students engaged in a project provided an environment that supported cross-cultural learning, even without specific culture-focused reflections or exercises.
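For concreteness, here is a minimal sketch of the kind of analysis step described above: per-student increases in each competence dimension plus a simple compound measure. The dimension names and scores are hypothetical placeholders, not the survey's actual instrument.

```python
# Illustrative per-student pre/post analysis with a compound measure.
import pandas as pd

pre = pd.DataFrame({
    "awareness":     [3.0, 2.5, 4.0],
    "communication": [2.0, 3.0, 3.5],
}, index=["s1", "s2", "s3"])

post = pd.DataFrame({
    "awareness":     [3.5, 3.0, 4.0],
    "communication": [3.0, 3.5, 4.0],
}, index=["s1", "s2", "s3"])

increase = post - pre                 # per-dimension individual increase
compound = increase.mean(axis=1)      # compound measure per student
print(increase)
print("Compound increase per student:\n", compound)
print("Cohort mean compound increase:", compound.mean())
```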
Abstract:
Noise is a constant presence in measurements. Its origin is related to the microscopic properties of matter. Since the seminal work of Brown in 1828, the study of stochastic processes has gained increasing interest with the development of new mathematical and analytical tools. In recent decades, the central role that noise plays in chemical and physiological processes has become recognized. The dual role of noise as nuisance/resource pushes towards the development of new decomposition techniques that divide a signal into its deterministic and stochastic components. In this thesis I show how methods based on Singular Spectrum Analysis (SSA) have the right properties to fulfil this requirement. During my work I applied SSA to different signals of interest in chemistry: I developed a novel iterative procedure for the denoising of powder X-ray diffractograms, and I denoised two-dimensional images from experiments in electrochemiluminescence (ECL) imaging of micro-beads, obtaining new insight into the ECL mechanism. I also used Principal Component Analysis to investigate the relationship between brain electrophysiological signals and voice emission.
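A minimal sketch of basic SSA denoising, in the spirit of the methods described: embed the series in a trajectory matrix, truncate its SVD, and reconstruct by anti-diagonal averaging. The window length and rank are illustrative choices, not the thesis's settings, and the iterative refinement developed in the thesis is not reproduced here.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Reconstruct the leading-rank SSA approximation of a 1-D signal."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged windows of the signal as columns.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the leading components (the 'deterministic' part).
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal (Hankel) averaging maps the matrix back to a series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)
clean = ssa_denoise(noisy, window=50, rank=2)
```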
Abstract:
This research activity aims at providing a reliable estimation of particular state variables and parameters concerning the dynamics and performance optimization of a MotoGP-class motorcycle, integrating the classical model-based approach with new methodologies involving artificial intelligence. The first topic of the research focuses on the estimation of the thermal behavior of the MotoGP carbon braking system. Numerical tools are developed to assess the instantaneous surface temperature distribution in the motorcycle's front brake discs. Within this application, other important brake parameters are identified using Kalman filters, such as the disc convection coefficient and the power distribution in the disc-pad contact region. Subsequently, a physical model of the brake is built to estimate the instantaneous braking torque. However, the results obtained with this approach are strongly limited by knowledge of the friction coefficient (μ) between the disc rotor and the pads. Since the value of μ is a highly nonlinear function of many variables (namely temperature, pressure and angular velocity of the disc), an analytical model for the friction coefficient is impractical to establish. To overcome this challenge, an innovative hybrid solution is implemented, combining the benefits of artificial intelligence (AI) with the classical model-based approach. The disc temperature estimated by the previously implemented thermal model is processed by a machine learning algorithm that outputs the actual value of the friction coefficient, thus improving the braking torque computation performed by the physical model of the brake. Finally, the last topic of this research activity concerns the development of an AI algorithm to estimate the current sideslip angle of the motorcycle's front tire. While a single-track motorcycle kinematic model and IMU accelerometer signals theoretically enable sideslip calculation, accelerometer noise leads to significant drift over time. To address this issue, a long short-term memory (LSTM) network is implemented.
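A conceptual sketch of the hybrid scheme just described: a thermal model supplies disc temperature, an ML regressor maps (temperature, pressure, wheel speed) to μ, and the torque is recomputed from μ. All numbers, the regressor choice, and the torque formula are illustrative assumptions, not the thesis's actual models.

```python
# Hybrid model-based + ML friction-coefficient sketch (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Hypothetical training set: features from the thermal model and telemetry,
# targets standing in for bench-measured friction coefficients.
X_train = rng.uniform([200.0, 2.0, 50.0], [800.0, 12.0, 250.0], size=(500, 3))
mu_train = 0.45 - 1e-4 * (X_train[:, 0] - 500.0) + 0.01 * rng.standard_normal(500)

model = GradientBoostingRegressor().fit(X_train, mu_train)

def braking_torque(temp_c, pressure_bar, omega_rad_s,
                   piston_area_m2=1.1e-3, n_pads=2, r_eff_m=0.13):
    """Torque = mu * clamping force * effective radius (simplified)."""
    mu = model.predict([[temp_c, pressure_bar, omega_rad_s]])[0]
    clamp_force = pressure_bar * 1e5 * piston_area_m2 * n_pads
    return mu * clamp_force * r_eff_m

print(f"Estimated torque: {braking_torque(550.0, 8.0, 180.0):.1f} N m")
```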
Abstract:
In the food and beverage industry, packaging plays a crucial role in protecting food and beverages and maintaining their organoleptic properties. Its disposal, unfortunately, is still difficult, mainly because there is a lack of economically viable systems for separating composite and multilayer materials. It is therefore necessary not only to increase research in this area, but also to set up pilot plants and implement these technologies on an industrial scale. LCA (Life Cycle Assessment) can serve these purposes: it allows an assessment of the potential environmental impacts associated with a product, service or process. The objective of this thesis work is to analyze the environmental performance of six separation methods designed to separate the polymeric from the aluminum fraction in multilayer packaging. The first four methods use the chemical dissolution technique, with Biodiesel, Cyclohexane, 2-Methyltetrahydrofuran (2-MeTHF) and Cyclopentyl-methyl-ether (CPME) as solvents. The last two apply the mechanical delamination technique with surfactant-activated water, using Ammonium laurate and Triethanolamine laurate as surfactants, respectively. For all six methods the LCA methodology was applied and the corresponding models were built with GaBi software version 10.6.2.9, developed specifically for LCA analyses. Unfortunately, due to a lack of data, it was not possible to obtain results for the dissolution methods using the solvents 2-MeTHF and CPME; for the other methods, the individual environmental performances were calculated. Results revealed that the methods with the best environmental performance are method 2 among the dissolution methods and method 5 among the delamination methods. This result is confirmed both by the analysis of normalized and weighted results and by the analysis of the 'original' results. A hotspot analysis was also conducted.
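To make the normalization-and-weighting comparison concrete, here is a toy illustration of that step. The impact values, normalization references and weights are all hypothetical; the thesis relies on GaBi's own factor sets.

```python
# Toy LCA normalization and weighting into a single score per method.
import numpy as np

categories = ["climate change", "acidification", "water use"]
# Characterized impacts per functional unit (hypothetical values).
impacts = {
    "method 2 (dissolution)":  np.array([1.8, 0.012, 0.9]),
    "method 5 (delamination)": np.array([2.1, 0.009, 0.7]),
}
norm_refs = np.array([8.0, 0.055, 11.5])   # e.g. per-person annual impacts
weights = np.array([0.5, 0.3, 0.2])        # relative importance, sums to 1

for name, vec in impacts.items():
    single_score = float(np.sum(vec / norm_refs * weights))
    print(f"{name}: weighted single score = {single_score:.4f}")
```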
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The vehicle routing problem is to find better routes to serve a set of geographically dispersed customers using vehicles based at a central depot, to which they return after serving the customers. These customers each have a demand that must be met. Such problems have wide practical application, including school transport, newspaper distribution and garbage collection, among others. Because it is a classic NP-hard problem, it has aroused interest in the search for viable methods of resolution. In this paper we use a Genetic Algorithm as the resolution method.
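A minimal sketch of a Genetic Algorithm for a capacitated vehicle routing problem, in the spirit of the approach named above: permutation chromosomes, order crossover, swap mutation, and a split of each permutation into capacity-feasible routes. The instance data and GA parameters are made up for illustration; the paper's actual encoding and operators may differ.

```python
import random
import math

random.seed(0)

DEPOT = (0.0, 0.0)
CUSTOMERS = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(12)]
DEMAND = [random.randint(1, 4) for _ in CUSTOMERS]
CAPACITY = 10

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(perm):
    """Split the permutation into capacity-feasible routes from the depot."""
    total, load, prev = 0.0, 0, DEPOT
    for c in perm:
        if load + DEMAND[c] > CAPACITY:      # start a new vehicle/route
            total += dist(prev, DEPOT)
            load, prev = 0, DEPOT
        total += dist(prev, CUSTOMERS[c])
        load += DEMAND[c]
        prev = CUSTOMERS[c]
    return total + dist(prev, DEPOT)

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

# Evolve: keep an elite, breed the rest from elite parents.
pop = [random.sample(range(len(CUSTOMERS)), len(CUSTOMERS)) for _ in range(60)]
for _ in range(200):
    pop.sort(key=route_length)
    elite = pop[:10]
    pop = elite + [
        mutate(order_crossover(random.choice(elite), random.choice(elite)))
        for _ in range(50)
    ]
print("Best total distance:", round(route_length(min(pop, key=route_length)), 2))
```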
Abstract:
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled with Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Typically, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method proved the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
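A sketch of the chemometric step, under stated assumptions: PLS regression of TAN on spectral intensities with a simple UVE-style filter that discards variables whose regression coefficients are no more stable than those of appended random noise variables. The data are random placeholders for the FT-ICR matrix, and the filter is a simplified reading of UVE, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n_samples, n_vars = 40, 300
X = rng.standard_normal((n_samples, n_vars))
y = X[:, :10].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)  # 10 informative vars

# UVE idea: append random (uninformative) variables, refit PLS with
# leave-one-out perturbation, and keep real variables whose coefficient
# stability exceeds the best stability achieved by any noise variable.
X_aug = np.hstack([X, rng.standard_normal((n_samples, n_vars))])
coefs = []
for i in range(n_samples):
    mask = np.arange(n_samples) != i
    pls = PLSRegression(n_components=5).fit(X_aug[mask], y[mask])
    coefs.append(pls.coef_.ravel())
coefs = np.array(coefs)
stability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)

cutoff = stability[n_vars:].max()          # worst-case noise stability
selected = np.where(stability[:n_vars] > cutoff)[0]
print(f"Selected {selected.size} of {n_vars} variables")

final = PLSRegression(n_components=5).fit(X[:, selected], y)
```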
Abstract:
Context. It is debated whether the Milky Way bulge has characteristics more similar to those of a classical bulge than to those of a pseudobulge. Detailed abundance studies of bulge stars are important when investigating the origin, history, and classification of the bulge. These studies provide constraints on the star-formation history, the initial mass function, and differences between stellar populations. Few such studies have been completed because of the large distance and the highly variable visual extinction along the line of sight towards the bulge. Therefore, near-IR investigations can provide superior results. Aims. To investigate the origin of the bulge, we study chemical abundances determined from near-IR spectra for bulge giants that have already been investigated with optical spectra. The optical spectra also provide the stellar parameters that are very important to the present study. In particular, the important CNO elements are determined more accurately in the near-IR. Oxygen and the other alpha elements are important for investigating the star-formation history. The C and N abundances are important for determining the evolutionary stage of the giants and the origin of C in the bulge. Methods. High-resolution near-infrared spectra in the H band were recorded using the CRIRES spectrometer mounted on the Very Large Telescope. The CNO abundances are determined from the numerous molecular lines in the wavelength range observed. Abundances of the alpha elements Si, S, and Ti are also determined from the near-IR spectra. Results. The abundance ratios [O/Fe], [Si/Fe], and [S/Fe] are enhanced up to metallicities of at least [Fe/H] = -0.3, after which they decline. This suggests that the Milky Way bulge experienced a rapid and early burst of star formation, similar to that of a classical bulge. However, a similarity between the bulge trend and the trend of the local thick disk seems to be present, which suggests that the bulge could have had a pseudobulge origin. The C and N abundances suggest that our giants are first-ascent red giants or clump stars, and that the measured oxygen abundances are those with which the stars were born. Our [C/Fe] trend does not show any increase with [Fe/H], which would be expected if W-R stars contributed substantially to the C abundances. No "cosmic scatter" can be traced around our observed abundance trends: the measured scatter is as expected, given the observational uncertainties.
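For readers unfamiliar with the bracket abundance notation used above ([Fe/H], [O/Fe], etc.), it denotes the standard logarithmic number-density ratio of two elements in the star relative to the same ratio in the Sun:

```latex
\[
\left[\mathrm{A}/\mathrm{B}\right] =
\log_{10}\!\left(\frac{N_{\mathrm{A}}}{N_{\mathrm{B}}}\right)_{\star}
- \log_{10}\!\left(\frac{N_{\mathrm{A}}}{N_{\mathrm{B}}}\right)_{\odot}
\]
```

So [Fe/H] = -0.3 means the star's iron-to-hydrogen ratio is about half (10^-0.3) of the solar value.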
Abstract:
We report here, for the first time, the observation of enhanced Europium-tetracycline complex emission in cholesterol solutions. This enhancement was initially observed upon adding to the solution the enzyme cholesterol oxidase, which produces H2O2, the agent that drives the Europium-tetracycline complex emission. However, it was found that the enzyme is not needed to enhance the luminescence. A calibration curve was determined, resulting in a simple method to measure the cholesterol content of a solution. This method shows that the complex can be used as a sensor to determine cholesterol in biological systems.
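A minimal sketch of the calibration-curve step: fit emission intensity against known cholesterol concentrations, then invert the fit to read an unknown sample. The intensities and concentrations below are invented for illustration, not the paper's measured data.

```python
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # mmol/L standards
intensity = np.array([10.0, 18.2, 26.1, 41.8, 73.9])  # emission, a.u.

slope, intercept = np.polyfit(conc, intensity, 1)      # linear calibration

def cholesterol_from_intensity(i):
    """Invert the calibration line to estimate concentration."""
    return (i - intercept) / slope

print(f"Unknown at I = 35 a.u. -> {cholesterol_from_intensity(35.0):.2f} mmol/L")
```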
Abstract:
Clinically, childhood occipital lobe epilepsy (OLE) manifests itself in distinct syndromes. Traditional EEG recordings have not been able to overcome the difficulty of correlating the ictal clinical symptoms with onset in particular areas of the occipital lobes. To understand these syndromes it is important to map the epileptogenic cortical regions in OLE with greater precision. Experimentally, we studied three idiopathic childhood OLE patients with EEG source analysis and with simultaneous acquisition of EEG and fMRI, to map the BOLD effect associated with EEG spikes. The spatial overlap between the EEG and BOLD results was not very good, but the fMRI suggested localizations more consistent with the ictal clinical manifestations of each type of epileptic syndrome. Our first results show that associating the BOLD effect with interictal spikes maps the epileptogenic areas to localizations different from those calculated from EEG sources, and that different EEG/fMRI processing methods yield somewhat different results. It is therefore very important to compare the different methods of localizing activation and to develop a sound methodology for obtaining co-registration maps of high-resolution EEG with BOLD localizations.
Abstract:
We present a method for segmenting white matter tracts from high angular resolution diffusion MR images by representing the data in a 5-dimensional space of position and orientation. Whereas crossing fiber tracts cannot be separated in 3D position space, they clearly disentangle in 5D position-orientation space. The segmentation is done using a 5D level set method applied to hyper-surfaces evolving in 5D position-orientation space. In this paper we present a methodology for constructing the position-orientation space. We then show how to implement the standard level set method in such a non-Euclidean, high-dimensional space. Level set theory is defined for N dimensions, but there are several practical implementation details to consider, such as the computation of mean curvature. Finally, we show results from a synthetic model and a few preliminary results on real data of a human brain acquired by high angular resolution diffusion MRI.
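As a toy illustration of the standard level set machinery the paper builds on (the paper itself evolves hyper-surfaces in 5D position-orientation space, which is not reproduced here), the sketch below runs explicit mean-curvature motion of a 2D level set function.

```python
import numpy as np

def curvature_flow_step(phi, dt=0.1):
    """One explicit step of mean-curvature motion for a level set phi."""
    py, px = np.gradient(phi)           # first derivatives (axis0=y, axis1=x)
    pyy, pyx = np.gradient(py)
    pxy, pxx = np.gradient(px)
    eps = 1e-8
    # Mean curvature of the level sets of phi (standard 2D formula).
    kappa = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / \
            (px**2 + py**2 + eps) ** 1.5
    grad_mag = np.sqrt(px**2 + py**2 + eps)
    return phi + dt * kappa * grad_mag

# Signed-distance-like initialization: a circle of radius 10.
y, x = np.mgrid[-32:32, -32:32]
phi = np.sqrt(x**2 + y**2) - 10.0
for _ in range(20):
    phi = curvature_flow_step(phi)       # interface shrinks under curvature
```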
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how best to quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrological parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested for its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue was to assess the potential benefits and implications of the resulting porosity realizations for groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well defined. Then the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data: the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach in general allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure proved very effective, even at the scale of tomographic resolution, for predictions of transport; this also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
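A highly simplified sketch of the simulated annealing conditional simulation idea described above: perturb a porosity field by swapping cell values, keep conditioning cells fixed, and accept swaps that improve (or, with temperature-controlled probability, worsen) a misfit against geophysically derived structure. The "GPR-derived" target here is just a smoothed random field, and the objective, cooling schedule and parameters are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(7)
shape = (30, 30)
# Stand-in for a GPR-derived porosity image.
target = uniform_filter(rng.normal(0.25, 0.05, shape), size=5)

# Start from a field with the right histogram but the wrong structure.
field = rng.permutation(target.ravel()).reshape(shape)
fixed = np.zeros(shape, dtype=bool)
fixed[::10, ::10] = True                 # 'borehole log' conditioning cells
field[fixed] = target[fixed]             # honor conditioning data exactly

def misfit(f):
    """Mismatch between the smoothed field and the geophysical target."""
    return float(np.mean((uniform_filter(f, size=5) - target) ** 2))

T, cooling = 1e-4, 0.995
current = misfit(field)
free = np.argwhere(~fixed)
for _ in range(20000):
    (i1, j1), (i2, j2) = free[rng.choice(len(free), 2, replace=False)]
    field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]
    new = misfit(field)
    if new < current or rng.random() < np.exp((current - new) / T):
        current = new                    # accept the swap
    else:                                # undo the swap
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]
    T *= cooling
print("Final misfit:", current)
```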