18 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis reports the analysis performed to reconstruct the transverse momentum ($p_{t}$) spectra of pions, kaons and protons identified with the TOF detector of the ALICE experiment in pp Minimum Bias collisions at $\sqrt{s}=7$ TeV.
After a detailed description of all the parameters that influence the TOF PID performance (time resolution, calibration, alignment, matching efficiency, event time-zero), the method used to identify the particles, an unfolding procedure, is discussed. With this method, thanks also to the excellent TOF performance, the pion and kaon spectra can be reconstructed in the 0.5
Abstract:
This work develops a neurogeometric model of stereo vision, based on the cortical architectures involved in 3D perception and on the neural mechanisms generated by retinal disparities. First, we provide a sub-Riemannian geometry for stereo vision, inspired by Zucker's work on the stereo problem (2006) and using the sub-Riemannian tools introduced by Citti-Sarti (2006) for monocular vision. We present a mathematical interpretation of the neural mechanisms underlying the behavior of binocular cells, which integrate monocular inputs. The natural compatibility between stereo geometry and neurophysiological models shows that these binocular cells are sensitive to position and orientation. We therefore model their action in the space $\mathbb{R}^3 \times S^2$ equipped with a sub-Riemannian metric. Integral curves of the sub-Riemannian structure model neural connectivity and can be related to the 3D analog of the psychophysical association fields for the process of regular contour formation in 3D. We then identify 3D perceptual units in the visual scene: they emerge as a consequence of the random cortico-cortical connectivity of binocular cells. Considering a suitable stochastic version of the integral curves, we generate a family of kernels. These kernels represent the probability of interaction between binocular cells, and they are implemented as facilitation patterns defining the time evolution of the neural population activity at a point. This activity is usually modeled through a mean-field equation: stable steady-state solutions lead to the associated eigenvalue problem. We show that three-dimensional perceptual units arise naturally from the discrete version of the eigenvalue problem associated with the integro-differential equation of the population activity.
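For concreteness, the population-activity model referred to above is commonly written in a Wilson-Cowan-type mean-field form; the notation below is illustrative and not quoted from the thesis. Writing $\xi \in \mathbb{R}^3 \times S^2$ for a position-orientation pair and $J(\xi,\xi')$ for the facilitation kernel, the activity $a(\xi,t)$ evolves as
$$\frac{\partial a}{\partial t}(\xi,t) = -\alpha\, a(\xi,t) + \sigma\!\left( \int_{\mathbb{R}^3\times S^2} J(\xi,\xi')\, a(\xi',t)\, d\xi' \right) + h(\xi,t),$$
and the study of stable stationary solutions leads to the eigenvalue problem
$$\int_{\mathbb{R}^3\times S^2} J(\xi,\xi')\, u(\xi')\, d\xi' = \lambda\, u(\xi),$$
whose discretized form $\mathbf{J}\mathbf{u} = \lambda\mathbf{u}$ is the one from which the 3D perceptual units are extracted.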
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of the seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked to incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the significant geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of the cross-correlation, it should be noted that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm thus developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable point of our procedure is that its application does not require long data-processing times, so the user can check the results immediately. During a field survey, this feature makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
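As an illustration of the waveform cross-correlation with sub-sample interpolation referred to above, the minimal Python sketch below estimates the differential time between two digitized traces. Function names and the choice of parabolic peak interpolation are assumptions for illustration, not the thesis implementation.

```python
# Hedged sketch: estimate the time shift that best aligns two digitized waveforms
# by cross-correlation, refining the peak with parabolic (sub-sample) interpolation.
import numpy as np
from scipy.signal import correlate

def differential_time(trace_a, trace_b, dt):
    """Return the time shift (in seconds) that best aligns trace_b to trace_a."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = correlate(a, b, mode="full")
    k = int(np.argmax(cc))                    # integer-sample lag of the peak
    # Parabolic interpolation around the peak improves resolution below one sample
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    else:
        delta = 0.0
    lag = k - (len(b) - 1) + delta            # lag in samples (possibly fractional)
    return lag * dt
```

Applied to the waveforms of two events at the same station (global scale) or of the same event at two array sensors (local scale), this kind of estimate replaces the manually picked arrival-time differences.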
Abstract:
My aim is to develop a theory of cooperation within the organization and to test it empirically. Drawing upon social exchange theory, social identity theory, the idea of collective intentions, and social constructivism, the main assumption of my work is that both cooperation and the organization itself are continually shaped and restructured by the actions, judgments, and symbolic interpretations of the parties involved. Therefore, I propose that the decision to cooperate, expressed as an intention to cooperate, reflects and depends on a three-step social process shaped by the interpretations of the actors involved. The first step entails an instrumental evaluation of cooperation in terms of social exchange. In the second step, this "social calculus" is translated into cognitive, emotional and evaluative reactions directed toward the organization. Finally, once the identification process is completed and membership awareness is established, I propose that individuals will start to think largely in terms of "We" instead of "I". Self-goals are redefined at the collective level, and the outcomes for self, others, and the organization become practically interchangeable. I decided to apply my theory to an important cooperative problem in management research: knowledge exchange within organizations. Hence, I conducted a quantitative survey among the members of the virtual community "www.borse.it" (n=108). Within this community, members freely decide to exchange their knowledge about the stock market. Because of the confirmatory requirements and the structural complexity of the proposed theory (i.e., the proposal that instrumental evaluations induce social identity, which in turn causes collective intentions), I use Structural Equation Modeling to test all the hypotheses in this dissertation. The empirical survey-based study found support for the theory of cooperation proposed in this dissertation. The findings suggest that an appropriate conceptualization of the decision to exchange knowledge is one where collective intentions depend proximally on social identity (i.e., cognitive identification, affective commitment, and evaluative engagement) with the organization, and this identity depends on instrumental evaluations of cooperators (i.e., perceived value of the knowledge received, assessment of past reciprocity, expected reciprocity, and expected social outcomes of the exchange). Furthermore, I find that social identity fully mediates the effects of instrumental motives on collective intentions.
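A full-mediation structure of this kind is straightforward to specify in a structural equation modeling package. The sketch below uses the open-source Python library semopy to encode the hypothesized paths; the variable names, the input file, and the use of semopy rather than the software actually employed in the dissertation are all assumptions for illustration.

```python
# Hedged sketch: a mediation model (instrumental evaluation -> social identity ->
# collective intention) specified in lavaan-like syntax with semopy. The column
# names in `data` are hypothetical composite survey scores, not the thesis data.
import pandas as pd
import semopy

model_desc = """
social_identity ~ instrumental_evaluation
collective_intention ~ social_identity
"""

data = pd.read_csv("survey_scores.csv")   # hypothetical file of respondent scores
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                    # path estimates, standard errors, p-values
```

Full mediation would then show up as a significant instrumental-evaluation -> identity path and identity -> intention path, with no appreciable direct effect once the mediator is included.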
Abstract:
Nowadays it is clear that creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; they also imply an increase in productivity and, as a consequence, smaller reactors can be used. In addition, an indirect gain can derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had continuous efforts not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, helping to move the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to implement all these ideas in real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency. In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound. As a matter of fact, only in a few oxidation reactions is total, or close to total, conversion achieved; usually the selectivity is limited by the formation of by-products or co-products, which often implies unfavourable process economics, and sometimes the cost of the oxidant further penalizes the process. During my PhD work I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric to a catalytic oxidation), thereby also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to activate hydrogen peroxide efficiently. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, i.e. using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The current industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone. This can be considered a case where economics drives the sustainability issue: the new process, which yields phenol directly, besides avoiding the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different), might be economically convenient with respect to the conventional process if a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with several TS-1 samples having different chemical-physical properties, and analyzing in detail the effect of the most important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an improvement of the selectivity to the para dihydroxylated compound and a decrease of the selectivity to the ortho isomer would be desirable. In this case too, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, finally, to make the industrial process more flexible, so that the process performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a batch stirred reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: type of solvent or co-solvent, and particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals. This molecule is nowadays produced in large amounts as a co-product of biodiesel synthesis; therefore, it is considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase, with hydrogen peroxide and TS-1; however, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein; the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. The oxidation step from acrolein to acrylic acid is already optimized at an industrial level; therefore, we decided to investigate in depth the first step of the process. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems usually met in glycerol dehydration).
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox one (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply the rules and strategies of "green chemistry"; in particular, I studied new, greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
Abstract:
The management and organization literature has extensively noted the crucial role that improvisation plays in organizations, as a learning process (Miner, Bassoff & Moorman, 2001), a creative process (Fisher & Amabile, 2008), a capability (Vera & Crossan, 2005), and a personal disposition (Hmielesky & Corbett, 2006; 2008). My dissertation aims to contribute to the existing literature on improvisation by addressing two general research questions: 1) How does improvisation unfold at the individual level? 2) What are the potential antecedents and consequences of individual proclivity to improvise? This dissertation is based on a mixed methodology that allowed me to deal with these two general research questions and enabled a constant interaction between the theoretical framework and the empirical results. The selected empirical field is haute cuisine, and the respondents are the executive chefs of the restaurants awarded by the Michelin Guide in Italy in 2010. The qualitative section of the dissertation is based on the analysis of 26 inductive case studies and offers a multifaceted contribution. First, I describe how improvisation works both as a learning and as a creative process. Second, I introduce a new categorization of individual improvisational scenarios (demanded creative improvisation, problem-solving improvisation, and pure creative improvisation). Third, I describe the differences between improvisation and other creative processes detected in the field (experimentation, brainstorming, trial and error through analytical procedure, trial and error, and imagination). The quantitative inquiry is founded on a Structural Equation Model, which allowed me to test simultaneously the relationships between proclivity to improvise and its antecedents and consequences. In particular, using a newly developed scale to measure individual proclivity to improvise, I test the positive influence of industry experience, self-efficacy, and age on proclivity to improvise, and the negative impact of proclivity to improvise on outcome deviation. Theoretical contributions and practical implications of the results are discussed.
Abstract:
BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of the hydrogeological conditions affect the real power rate of the heat exchangers and, consequently, the amount of energy extracted from or injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to use at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the duration of the test. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and estimation uncertainty. The approach adopted for reservoir characterization is the "inverse problem", typical of oil & gas field analysis. Similarly, we create different realizations of the thermal properties by direct sequential simulation and we find the one that best fits the real production data (fluid temperature over time). The software used to simulate heat production is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
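The realization-selection step of this inverse approach amounts to a search for the realization whose simulated fluid temperature best matches the measured record. A minimal sketch follows, where `simulate_fluid_temperature` stands in for the FEFLOW (or SA-/FV-Geotherm) forward run and is purely hypothetical.

```python
# Hedged sketch: among geostatistical realizations of the ground thermal
# conductivity field, keep the one whose simulated fluid temperature history
# minimizes the RMSE against the measured TRT / production record.
import numpy as np

def best_realization(realizations, t_measured, simulate_fluid_temperature):
    """Return (index, rmse) of the realization best fitting the measurements."""
    best_idx, best_rmse = None, np.inf
    for i, conductivity_field in enumerate(realizations):
        t_sim = simulate_fluid_temperature(conductivity_field)   # forward model run
        rmse = np.sqrt(np.mean((t_sim - t_measured) ** 2))
        if rmse < best_rmse:
            best_idx, best_rmse = i, rmse
    return best_idx, best_rmse
```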
Abstract:
The work presented in this thesis focuses on the open-ended coaxial-probe, frequency-domain reflectometry technique for measuring the complex permittivity of dispersive, multilayer dielectric materials at microwave frequencies. An effective dielectric model is introduced and validated to extend the applicability of this technique to multilayer materials in an on-line system context. In addition, the thesis presents: 1) a numerical study of the imperfect contact at the probe-material interface, 2) a review of the available models and techniques, and 3) a new classification of the extraction schemes, with guidelines on how they can be used to improve the overall performance of the probe according to the problem requirements.
Abstract:
Microalgae are sunlight-driven cell factories that convert carbon dioxide into biofuels, foods, feeds, and other bioproducts. The concept of microalgae cultivation as a system integrated into wastewater treatment has enhanced the potential of microalgae-based biofuel production. These microorganisms contain lipids, polysaccharides, proteins, pigments and other cell compounds, and their biomass can provide different kinds of biofuels such as biodiesel, biomethane and ethanol. The application of the algal biomass strongly depends on the cell composition, and the production of biofuels appears to be economically convenient only in conjunction with wastewater treatment. The aim of this research was to investigate, at laboratory scale, a biological wastewater treatment system growing a newly isolated freshwater microalga, Desmodesmus communis, in effluents generated by a local wastewater reclamation facility in Cesena (Emilia-Romagna, Italy), in batch and semi-continuous cultures. This work showed the potential of this microorganism in algae-based wastewater treatment: Desmodesmus communis had a great capacity to grow in the wastewater, competing with other naturally present microorganisms and adapting to various environmental conditions such as different irradiance levels and nutrient concentrations. The nutrient removal efficiency was characterized at different hydraulic retention times, as were the algal growth rate and the biomass composition in terms of proteins, polysaccharides, total lipids and total fatty acids (TFAs), which are considered the substrate for biodiesel production. The biochemical analyses were coupled with elemental analysis of the biomass, which quantified the amount of carbon and nitrogen in the algal biomass. Furthermore, photosynthetic investigations were carried out to better correlate the environmental conditions with the physiological responses of the cells and, consequently, to obtain more information to optimize the growth rate and the increase of TFAs and of the C/N ratio, cellular compounds and biomass parameters that are fundamental for biomass energy recovery.
Abstract:
Quality control of medical radiological systems is of fundamental importance and requires efficient methods for accurately determining the X-ray source spectrum. Straightforward measurement of X-ray spectra under standard operating conditions requires limiting the high photon flux, and therefore the measurement has to be performed in a laboratory. However, optimal quality control requires frequent in situ measurements, which can only be performed with a portable system. To reduce the photon flux by three orders of magnitude, an indirect technique based on the scattering of the X-ray source beam by a solid target is used. The measured spectrum lacks information because of transport and detection effects. The source spectrum is then recovered by unfolding, i.e. by solving the matrix equation that formally represents the scattering problem. However, the algebraic system is ill-conditioned and, therefore, it is not possible to obtain a satisfactory solution directly; special strategies are necessary to circumvent the ill-conditioning. Numerous attempts have been made to solve this problem using purely mathematical methods. In this thesis, a more physical point of view is adopted. The proposed method uses both the forward and the adjoint solutions of the Boltzmann transport equation to generate a better-conditioned linear algebraic system. The procedure has been tested first on numerical experiments, giving excellent results. The method has then been verified with experimental measurements performed at the Operational Unit of Health Physics of the University of Bologna. The reconstructed spectra have been compared with those obtained by straightforward measurements, showing very good agreement.
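Schematically, and with notation that is illustrative rather than quoted from the thesis, the unfolding problem described above has the discrete form
$$\mathbf{m} = \mathbf{R}\,\mathbf{s} + \mathbf{e},$$
where $\mathbf{s}$ is the unknown source spectrum binned in energy, $\mathbf{m}$ the scattered-and-detected pulse-height spectrum, $\mathbf{R}$ the response matrix accounting for target scattering, photon transport and detector effects, and $\mathbf{e}$ the measurement noise. Because $\mathbf{R}$ is severely ill-conditioned, the naive inversion $\mathbf{s} = \mathbf{R}^{-1}\mathbf{m}$ amplifies noise; purely mathematical remedies typically regularize, e.g. $\min_{\mathbf{s}} \lVert \mathbf{R}\mathbf{s} - \mathbf{m} \rVert^2 + \lambda \lVert \mathbf{s} \rVert^2$, whereas the approach taken here instead builds a better-conditioned system from the forward and adjoint transport solutions.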
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues in terms of the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of "energy scavenging systems" are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of an electrical and thermal model.
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid in a laminar, incompressible, viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation coupled with a non-linear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced within a monolithic framework that solves simultaneously for the fluid and structure unknowns in a single solver. We use the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result has been proved for the Backward-Euler time-stepping scheme applied to both the fluid and the solid part, with the finite element method used for the space discretization. The resulting linear system is solved by multilevel domain decomposition techniques: our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present results for a set of two FSI benchmark configurations describing the self-induced elastic deformation of a beam attached to a cylinder in laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we consider an academic numerical test consisting of the simulation of pressure-wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
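For reference, the fluid part of such a monolithic ALE formulation is commonly written as follows; this is a generic textbook form, not quoted from the thesis. With $\mathbf{u}$ the fluid velocity, $\mathbf{w}$ the mesh velocity, $\boldsymbol{\sigma}_f$ the fluid Cauchy stress and $\mathbf{d}$ the structural displacement, the problem in the moving fluid domain $\Omega_f(t)$ reads
$$\rho_f \left( \left.\frac{\partial \mathbf{u}}{\partial t}\right|_{\mathcal{A}} + \big((\mathbf{u}-\mathbf{w})\cdot\nabla\big)\mathbf{u} \right) = \nabla\cdot\boldsymbol{\sigma}_f + \rho_f \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0,$$
coupled with the structural momentum balance in $\Omega_s(t)$ and with the interface conditions
$$\mathbf{u} = \frac{\partial \mathbf{d}}{\partial t}, \qquad \boldsymbol{\sigma}_f \mathbf{n} = \boldsymbol{\sigma}_s \mathbf{n} \quad \text{on } \Gamma_{fs}(t),$$
i.e. continuity of velocity and of traction, which a monolithic solver enforces implicitly at every time step rather than by iterating between separate fluid and structure codes.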
Abstract:
The characterization of contaminated sediments is a complex problem. This work identifies a characterization methodology that takes into account both the nature of the contamination, with analyses aimed at determining the total content of contaminants, and the mobility of the pollutants themselves. An adequate characterization strategy can be applied to the evaluation of remediation treatments; to this end, the soil-washing treatment was evaluated, investigating the characteristics of the dredged sediments and of the material leaving the process (sand and fine fraction), and also comparing the characteristics of the output sand with those of the sands commonly used for various applications. It was considered necessary to investigate compatibility from the chemical, granulometric and morphological points of view. To investigate mobility, the leaching tests defined at both the international and the Italian (UNI) level were applied, and the technologies needed to carry out the leaching tests effectively were developed, automating the management of the pH-stat test UNI CEN 14997. This was necessary because of the difficulty of managing the test manually, since the required timings are hard for an operator to sustain. Redox conditions influence pollutant mobility; in particular, the ageing in air of anoxic sediments causes appreciable changes in the oxidation state of some components, increasing their mobility. This is therefore an aspect to consider when identifying adequate storage and disposal conditions, and an experimental campaign was carried out for this purpose.
Abstract:
The main research areas of this thesis are Interference Management and Link-Level Power Efficiency for Satellite Communications. The thesis is divided into two parts. Part I tackles the problem of interference environments in satellite communications and of interference mitigation strategies, not just in terms of avoidance of the interferers, but also in terms of actually exploiting the interference present in the system as a useful signal. The analysis follows a top-down approach across different levels of investigation, starting from system-level considerations on interference management, down to link-level aspects and intra-receiver design. Interference management techniques are proposed at all levels of investigation, with interesting results. Part II is related to efficiency in the power domain, for instance in terms of the Input Back-Off required at the power amplifiers, which can be an issue for waveforms based on linear modulations because of their varying envelope. To cope with such aspects, an analysis is carried out comparing linear modulations with waveforms based on constant-envelope modulations. It is shown that in some scenarios constant-envelope waveforms, even if at lower spectral efficiency, outperform linear-modulation waveforms in terms of energy efficiency.
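The back-off issue mentioned for linear modulations comes from the peak-to-average power ratio (PAPR) of the transmitted envelope. The short Python sketch below contrasts a pulse-shaped QPSK waveform with a constant-envelope (phase-only) signal; the filter choice, parameters and phase process are illustrative assumptions, not the waveforms studied in the thesis.

```python
# Hedged sketch: compare the PAPR of a low-pass pulse-shaped QPSK waveform
# (fluctuating envelope, needs amplifier back-off) with a constant-envelope
# signal (PAPR ~ 0 dB, can be driven close to saturation).
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(0)
n_sym, sps = 4096, 8

# Linear modulation: QPSK symbols, upsampled and pulse-shaped with a simple FIR filter
symbols = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)
x = np.zeros(n_sym * sps, dtype=complex)
x[::sps] = symbols
taps = firwin(numtaps=8 * sps + 1, cutoff=1.0 / sps)   # illustrative shaping filter
linear = lfilter(taps, 1.0, x)

# Constant-envelope waveform: information carried in the phase only
phase = 2 * np.pi * np.cumsum(rng.uniform(-0.05, 0.05, n_sym * sps))
constant_env = np.exp(1j * phase)

def papr_db(sig):
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

print(f"pulse-shaped QPSK PAPR: {papr_db(linear):5.2f} dB")        # a few dB
print(f"constant-envelope PAPR: {papr_db(constant_env):5.2f} dB")  # ~0 dB
```

The dB gap printed by the sketch is, roughly, the extra Input Back-Off a linear-modulation waveform would demand from the amplifier in this simplified setting.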
Abstract:
Precision Agriculture (PA) and the more specific branch of Precision Horticulture are two very promising sectors. They focus on the use of technologies in agriculture to optimize the use of inputs, so as to reach better efficiency and minimize the waste of resources. This important objective has motivated many researchers and companies to search for new technological solutions. Sometimes the effort proved to be a good seed, sometimes an unfeasible idea. As a result, PA, since its birth roughly 25 years ago, is still perceived as a "new" form of management, interesting for the future, yet experts and researchers still report a low adoption rate. This work aims to contribute to identifying the causes of this low adoption rate and to propose a methodological solution to the problem. The first step was to examine prior research on Precision Agriculture adoption, using both ex ante and ex post approaches. It was considered important to find connections between these two phases of the purchase experience: the ex ante studies deal with potential consumers' perceptions before a usage experience occurs, therefore before purchasing a technology, while the ex post studies describe the drivers that make a farmer become an end-user of PA technology. Then, an example of consumer research is presented: an ex ante study focused on a pre-prototype technology for fruit production. This kind of research can give precious information about consumer acceptance before an advanced development phase of the technology is reached, and thus offers the possibility of making changes with the least financial impact. The final step was to develop the pre-prototype technology that was the subject of the consumer acceptance research and to test its technical characteristics.