904 results for 100602 Input Output and Data Devices


Relevance: 100.00%

Publisher:

Abstract:

With the increasing importance that nanotechnologies have in everyday life, it is not difficult to realize that a single molecule, if properly designed, can be a device able to perform useful functions: such a chemical species is called a chemosensor, that is, a molecule of abiotic origin that signals the presence of matter or energy. Signal transduction is the mechanism by which the interaction of a sensor with an analyte yields a measurable form of energy. When designing a chemosensor, we need to take into account a “communication requirement” between its three components: the receptor unit, responsible for the selective analyte binding; the spacer, which controls the geometry of the system and modulates the electronic interaction between the receptor and the signalling unit; and the signalling unit itself, whose physico-chemical properties change upon complexation. A luminescent chemosensor communicates a variation of the physico-chemical properties of the receptor unit with a luminescence output signal. This thesis work consists in the characterization of new molecular and nanoparticle-based systems which can be used as sensitive materials for the construction of new optical transduction devices able to provide information about the concentration of analytes in solution. In particular, two directions were taken. The first is to continue the development of new chemosensors, which is the first step for the construction of reliable and efficient devices; in particular, the work is focused on chemosensors for metal ions for biomedical and environmental applications. The second is to study more efficient and complex organized systems, such as derivatized silica nanoparticles. These systems can potentially have higher sensitivity than molecular systems, and present many advantages, such as the possibility of ratiometric measurements, larger Stokes shifts and higher signal-to-noise ratios.
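A minimal numeric sketch of the ratiometric read-out mentioned above, assuming an idealised 1:1 sensor-metal binding equilibrium (the function names, the association constant and the intensity ratios below are hypothetical, not taken from the thesis):

```python
# Toy 1:1 binding model: fraction of sensor bound at free-analyte
# concentration `conc` (M), association constant K (M^-1).
def fraction_bound(conc, K):
    return K * conc / (1.0 + K * conc)

# Ratiometric signal I(band1)/I(band2): linear mix of free and bound forms.
def ratiometric_signal(conc, K, R_free, R_bound):
    f = fraction_bound(conc, K)
    return (1.0 - f) * R_free + f * R_bound

# Invert the ratiometric response to estimate the analyte concentration.
def concentration_from_ratio(R, K, R_free, R_bound):
    f = (R - R_free) / (R_bound - R_free)
    return f / (K * (1.0 - f))

# Round trip with hypothetical parameters: K = 1e6 M^-1, ratios 0.2 -> 2.0.
K, R_free, R_bound = 1e6, 0.2, 2.0
R = ratiometric_signal(3e-6, K, R_free, R_bound)
est = concentration_from_ratio(R, K, R_free, R_bound)  # recovers ~3e-6 M
```

The point of the ratio is that it cancels fluctuations common to both emission bands (lamp drift, probe loading), which is why ratiometric systems are listed among the advantages above.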

Relevance: 100.00%

Publisher:

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics has dramatically increased for both groundwater resource mapping and management, proving how well suited those systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the “Full waveform VTEM” dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration constitute the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
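The damped least-squares idea underlying smooth geophysical inversion can be sketched as follows. This is a toy linear forward model, not the actual AeroTEM/VTEM processing chain; it only illustrates the regularised normal equations (G^T G + αI) m = G^T d:

```python
# Gaussian elimination with partial pivoting for a small dense system Ax = b.
def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Tikhonov-damped least squares: minimise ||G m - d||^2 + alpha ||m||^2,
# i.e. solve (G^T G + alpha I) m = G^T d for the model vector m.
def tikhonov_inversion(G, d, alpha):
    n = len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(len(G)))
            + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
    Gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    return solve(GtG, Gtd)
```

In a real AEM workflow G is a nonlinear electromagnetic forward operator linearised at each iteration, and the regularisation term enforces vertical/lateral smoothness rather than small model norm; the algebraic skeleton, however, is the same.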

Relevance: 100.00%

Publisher:

Abstract:

This study deals with the internationalization behavior of a new and specific type of e-business company, namely the network managing e-business company (NM-EBC). The business model of such e-business companies is based on providing a platform and applications for users to connect and interact, on gathering and channeling the inputs provided by the users, and on organizing and managing the cross-relationships of the various participants. Examples are online communities, matching platforms, and portals. Since NM-EBCs internationalize by replicating their business model in a foreign market and by building up and managing a network of users, who provide input themselves and interact with each other, they have to convince users in foreign markets to join the network and hence to adopt their platform. We draw upon Rogers’ Diffusion of Innovations Theory and Network Theory to explain the internationalization behavior of NM-EBCs. These two theories originate from neighboring disciplines and have not yet been used to explain the internationalization of firms. We combine both theories and formulate hypotheses about which strategies NM-EBCs may choose to expand abroad. To test the applicability of our theory and to gain rich data about the internationalization behavior of these firms, we carried out multiple case studies with internationally active Germany-based NM-EBCs.

Relevance: 100.00%

Publisher:

Abstract:

The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs) from an experimental and numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experiments were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared to numerical results, in order to extend the experimental database. Results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be “tuned” based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities. The hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which will maximise the device density and minimize the marine space required for the installation.
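A brief worked sketch of the quantities used to judge such a layout, under the standard deep-water approximation for irregular waves (the numerical values are hypothetical, not the thesis results):

```python
import math

RHO, G = 1025.0, 9.81  # sea-water density (kg/m^3), gravity (m/s^2)

# Deep-water wave energy flux per metre of wave crest (W/m):
# P = rho * g^2 * Hm0^2 * Te / (64 * pi).
def wave_power_per_metre(Hm0, Te):
    return RHO * G**2 * Hm0**2 * Te / (64.0 * math.pi)

# Absorbed power relative to the wave power incident on the device width.
def capture_width_ratio(P_absorbed, Hm0, Te, device_width):
    return P_absorbed / (wave_power_per_metre(Hm0, Te) * device_width)

# Transmission coefficient: transmitted / incident significant wave height
# (Kt = 1.0 means no sheltering of the rear coastal area).
def transmission_coefficient(Hm0_behind, Hm0_incident):
    return Hm0_behind / Hm0_incident
```

A high Kt behind a single row, as reported above, is exactly what motivates treating the tested layout as one module of a larger staggered farm.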

Relevance: 100.00%

Publisher:

Abstract:

Holding the major share of stellar mass in galaxies and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating the various evolution scenarios, as well as useful means to provide insights on cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. The comparison between the output and input light-weighted ages shows a good agreement starting from SNRs of ∼10, with a bias of ∼2.2% and a dispersion of ∼3%. Furthermore, metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin); absolute light-weighted ages, instead, are independent of the fitting parameters or the synthetic models used. Metallicities are very similar to each other and clearly consistent with the ones derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20), to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR.
Our upcoming aim is to adopt this approach also on galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra, just like the Euclid ones.
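How a light-weighted population parameter is summarised from a best-fit template combination can be sketched as follows. This is a generic reduction, not STARLIGHT's internal code; the choice of averaging log(age), common in such analyses, is exposed as an option, and the population vector used in the example is invented:

```python
import math

# Weighted mean of SSP ages (or metallicities) over the population vector
# of light fractions x_i returned by a spectral fit.
def light_weighted_mean(light_fractions, values, log_average=False):
    total = sum(light_fractions)
    if log_average:
        # average in log space (typical for ages spanning decades)
        s = sum(x * math.log10(v) for x, v in zip(light_fractions, values))
        return 10 ** (s / total)
    return sum(x * v for x, v in zip(light_fractions, values)) / total
```

Comparing such output light-weighted ages against the ages of known input (simulated) spectra is precisely the bias/dispersion test quoted above.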

Relevance: 100.00%

Publisher:

Abstract:

The activity carried out during my PhD was principally addressed to the development of portable microfluidic analytical devices based on biospecific molecular recognition reactions and chemiluminescence (CL) detection. In particular, the development of biosensors required the study of different materials and procedures for their construction, with particular attention to the development of suitable immobilization procedures and fluidic systems and to the selection of suitable detectors. Different methods were exploited, such as gene probe hybridization assays or immunoassays, based on different platforms (functionalized glass slides or nitrocellulose membranes), trying to improve the simplicity of the assay procedure. Different CL detectors were also employed and compared with each other in the search for the best compromise between portability and sensitivity. The work was therefore aimed at the miniaturization and simplification of analytical devices, and the study involved all aspects of the system, from the analytical methodology to the type of detector, in order to combine high sensitivity with ease of use and rapidity. The latest development, involving the use of the smartphone as a chemiluminescence detector, paves the way for a new generation of analytical devices in the clinical diagnostic field, thanks to the ideal combination of the sensitivity and simplicity of CL with the day-by-day increase in the performance of new-generation smartphone cameras. Moreover, the connectivity and data processing offered by smartphones can be exploited to perform analyses directly at home with simple procedures. The system could eventually be used to monitor patient health and directly notify the physician of the analysis results, allowing a decrease in costs and an increase in healthcare availability and accessibility.
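The smartphone-camera read-out reduces, in essence, to integrating pixel intensities over the emitting spot and subtracting a dark region of the frame. A minimal sketch under that assumption (ROI coordinates and pixel values are hypothetical; a real pipeline would also handle colour channels and exposure):

```python
# Sum pixel values in a rectangular region of a 2D grayscale image
# (image = list of rows of numbers).
def roi_sum(image, top, left, height, width):
    return sum(sum(row[left:left + width]) for row in image[top:top + height])

# Background-corrected CL intensity, normalised per pixel of each region.
def cl_signal(image, roi, background_roi):
    r_top, r_left, r_h, r_w = roi
    b_top, b_left, b_h, b_w = background_roi
    signal = roi_sum(image, *roi) / (r_h * r_w)
    background = roi_sum(image, *background_roi) / (b_h * b_w)
    return signal - background
```

Because chemiluminescence needs no excitation source, a dark-box adapter plus this kind of integration is often all the optics the measurement requires, which is what makes the smartphone approach attractive for home use.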

Relevance: 100.00%

Publisher:

Abstract:

Until a few years ago, 3D modelling was a topic confined to a professional environment. Nowadays technological innovations, most notably the 3D printer, have attracted novice users to this application field. This sudden breakthrough was not supported by adequate software solutions. The 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which is largely supported by two-dimensional input/output devices and strongly affected by obvious geometrical constraints. We have identified three main phases that characterize the creation and management of 3D virtual models. We investigated these directions, evaluating and simplifying the classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. To pursue these goals we asked how new gesture-based interaction technologies can be successfully employed in 3D modelling environments, how we could improve depth perception and interaction in 3D environments, and which operations could be developed to simplify the classical virtual model editing paradigm. Our main aim was to propose a set of solutions with which a common user can realize an idea in a 3D virtual model, drawing in the air just as they would on paper. Moreover, we tried to use gestures and mid-air movements to explore and interact in the 3D virtual environment, and we studied simple and effective 3D form transformations. The work was carried out adopting the discrete representation of the models, thanks to its intuitiveness, but especially because it is full of open challenges.
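A toy example of the kind of global free-form deformation such editing tools apply to a discrete (vertex-based) model, here a Barr-style taper along the z axis. The parameterisation is illustrative, not the thesis implementation:

```python
# Taper a vertex list: scale x and y linearly with height z,
# v' = ((1 + k*z) * x, (1 + k*z) * y, z).  k > 0 widens the top,
# k < 0 narrows it; k = 0 is the identity.
def taper(vertices, k):
    return [((1 + k * z) * x, (1 + k * z) * y, z) for (x, y, z) in vertices]

# Example: the top vertex of a unit column is widened by 50%.
deformed = taper([(1.0, 1.0, 0.0), (1.0, 1.0, 1.0)], 0.5)
```

Mapping a mid-air gesture (e.g. a pinch-and-pull amplitude) onto a single scalar like `k` is one simple way to expose a free-form deformation to a non-expert user.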

Relevance: 100.00%

Publisher:

Abstract:

This thesis was carried out inside ESA's ESEO mission and focuses on the design of one of the secondary payloads carried on board the spacecraft: a GNSS receiver for orbit determination. The purpose of this project is to test the technology of real-time orbit determination using commercial components. The architecture of the receiver includes a custom part, the navigation computer, and a commercial part consisting of the front-end, from Novatel, with the COCOM limitation removed, and a GNSS antenna. This choice is motivated by the goal of demonstrating correct operation in orbit, enabling a widespread use of this technology while lowering the cost and time of the device's assembly. The commercial front-end performs GNSS signal acquisition, tracking and data demodulation, and provides raw GNSS data to the custom computer. This computer processes these raw observables, which are both transferred to the On-Board Computer for transmission to Earth and provided as input to the on-board recursive estimation filter, in order to obtain an accurate positioning of the spacecraft using the dynamic model. The main purpose of this thesis is the detailed design and development of the mentioned GNSS receiver up to the ESEO project Critical Design Review, including requirements definition, hardware design and preliminary breadboard test phase design.
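The recursive estimation filter mentioned above can be illustrated, in heavily simplified scalar form, by one Kalman predict/update cycle. This is a toy filter, not the actual ESEO navigation filter; the process/measurement noise values are arbitrary, and a real orbit determination filter propagates a multi-dimensional state through an orbital dynamics model:

```python
# One scalar Kalman predict/update cycle.
# x, P: state estimate and its variance; z: new (pseudo-)measurement.
# F: state transition, Q: process noise, H: measurement model, R: meas. noise.
def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=1.0):
    # predict through the dynamic model
    x_pred = F * x
    P_pred = F * P * F + Q
    # update with the GNSS measurement
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Feed a stream of noisy measurements; the estimate tightens step by step.
x, P = 0.0, 10.0
for z in [5.1, 4.9, 5.0, 5.2, 4.8]:
    x, P = kalman_step(x, P, z)
```

The blending of a dynamic model (predict) with raw observables (update) is exactly what lets the on-board filter output positions smoother and more accurate than any single GNSS fix.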

Relevance: 100.00%

Publisher:

Abstract:

Graphene, the thinnest two-dimensional material possible, is considered a realistic candidate for numerous applications in electronic, energy storage and conversion devices due to its unique properties, such as high optical transmittance, high conductivity, and excellent chemical and thermal stability. However, the electronic and chemical properties of graphene are highly dependent on its preparation method. Therefore, the development of novel chemical exfoliation processes aiming at the high-yield synthesis of high-quality graphene, while maintaining good solution processability, is of great interest. This thesis focuses on the solution production of high-quality graphene by wet-chemical exfoliation methods and addresses the applications of the chemically exfoliated graphene in organic electronics and energy storage devices.

Platinum is the most commonly used catalyst for fuel cells, but it suffers from sluggish electron transfer kinetics. On the other hand, heteroatom-doped graphene is known to enhance not only electrical conductivity but also long-term operational stability. In this regard, a simple synthetic method is developed for the preparation of nitrogen-doped graphene (NG). Moreover, iron (Fe) can be incorporated into the synthetic process. As-prepared NG, with and without Fe, shows excellent catalytic activity and stability compared to that of Pt-based catalysts.

High electrical conductivity is one of the most important requirements for the application of graphene in electronic devices. Therefore, for the fabrication of electrically conductive graphene films, a novel methane-plasma-assisted reduction of GO is developed. The high electrical conductivity of the plasma-reduced GO films yields an excellent electrochemical performance in terms of high power and energy densities when used as an electrode in micro-supercapacitors.

Although GO can be prepared on a bulk scale, its large defect density and low electrical conductivity are major drawbacks. To overcome the intrinsic limitation of the poor quality of GO and/or reduced GO, a novel protocol is established for the mass production of high-quality graphene by means of electrochemical exfoliation of graphite. The prepared graphene shows high electrical conductivity, low defect density and good solution processability. Furthermore, when used as electrodes in organic field-effect transistors and/or in supercapacitors, the electrochemically exfoliated graphene shows excellent device performance. The low-cost and environmentally friendly production of such high-quality graphene is of great importance for future-generation electronics and energy storage devices.

Relevance: 100.00%

Publisher:

Abstract:

Failing cerebral blood flow (CBF) autoregulation may contribute to cerebral damage after traumatic brain injury (TBI). The purpose of this study was to describe the time course of CO2-dependent vasoreactivity, measured as the CBF velocity response to hyperventilation (vasomotor reactivity [VMR] index). We included 13 patients who had had severe TBI, 8 of whom received norepinephrine (NE) based on clinical indication. In these patients, measurements were also performed after dobutamine administration, with a goal of increasing cardiac output by 30%. Blood flow velocity was measured with transcranial Doppler ultrasound in both hemispheres. All patients except one had an abnormal VMR index in at least one hemisphere within the first 24 h after TBI. In those patients who did not receive catecholamines, the mean VMR index recovered within the first 48 to 72 h. In contrast, in patients who received NE within the first 48-h period, the VMR index did not recover on the second day. Cardiac output and mean CBF velocity increased significantly during dobutamine administration, but the VMR index did not change significantly. In conclusion, CO2 vasomotor reactivity was abnormal in the first 24 h after TBI in most of the patients, but recovered within 48 h in those patients who did not receive NE, in contrast to those eventually receiving the drug. Addition of dobutamine to NE had variable but overall insignificant effects on CO2 vasomotor reactivity.
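A hedged sketch of how a CO2 reactivity index of this kind is commonly computed from transcranial Doppler data. The exact VMR definition used in the study is not given in the abstract, and formulas vary between groups; the version below expresses the percentage fall in mean CBF velocity per mmHg fall in CO2 during hyperventilation, with invented example values:

```python
# %-change in mean CBF velocity per mmHg CO2 change during hyperventilation.
# A positive value reflects the normal vasoconstrictive response to hypocapnia;
# values near zero suggest impaired CO2 vasoreactivity.
def vasomotor_reactivity(v_baseline, v_hyperventilation,
                         co2_baseline, co2_hyperventilation):
    dv_percent = 100.0 * (v_baseline - v_hyperventilation) / v_baseline
    dco2 = co2_baseline - co2_hyperventilation
    return dv_percent / dco2
```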

Relevance: 100.00%

Publisher:

Abstract:

SMARTDIAB is a platform designed to support the monitoring, management, and treatment of patients with type 1 diabetes mellitus (T1DM), by combining state-of-the-art approaches in the fields of database (DB) technologies, communications, simulation algorithms, and data mining. SMARTDIAB consists mainly of two units: 1) the patient unit (PU); and 2) the patient management unit (PMU), which communicate with each other for data exchange. The PMU can be accessed by the PU through the internet using devices such as PCs/laptops with direct internet access or mobile phones via a Wi-Fi/General Packet Radio Service access network. The PU consists of an insulin pump for subcutaneous insulin infusion to the patient and a continuous glucose measurement system. The aforementioned devices, running a user-friendly application, gather patient-related information and transmit it to the PMU. The PMU consists of a diabetes data management system (DDMS), a decision support system (DSS) that provides risk assessment for long-term diabetes complications, and an insulin infusion advisory system (IIAS), which reside on a Web server. The DDMS can be accessed by both medical personnel and patients, with appropriate security access rights and front-end interfaces. The DDMS, apart from being used for data storage/retrieval, also provides advanced tools for the intelligent processing of the patient's data, supporting the physician in decision making regarding the patient's treatment. The IIAS is used to close the loop between the insulin pump and the continuous glucose monitoring system, by providing the pump with the appropriate insulin infusion rate in order to keep the patient's glucose levels within predefined limits. The pilot version of SMARTDIAB has already been implemented, while the platform's evaluation in a clinical environment is in progress.
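Purely as an illustration of the "keep glucose within predefined limits" logic, and emphatically not the SMARTDIAB IIAS algorithm (which is not detailed in the abstract), a minimal rule layer routing a continuous-glucose reading to an action label might look like this (the band limits are example values only):

```python
# Example target band in mg/dL; real limits are patient-specific and
# set by the clinician, not hard-coded.
LOW, HIGH = 70, 180

# Map a CGM reading to an action label for the management unit.
def classify_reading(glucose_mg_dl):
    if glucose_mg_dl < LOW:
        return "hypo-alert"      # e.g. suspend/reduce infusion, notify carer
    if glucose_mg_dl > HIGH:
        return "hyper-advice"    # e.g. advisory system computes a correction
    return "in-range"
```

In the real platform this decision sits behind the IIAS, which computes actual infusion rates from a physiological model rather than fixed thresholds.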

Relevance: 100.00%

Publisher:

Abstract:

Ventricular assist devices (VADs) and total artificial hearts have been in development for the last 50 years. Since their inception, simulators of the circulation with different degrees of complexity have been produced to test these devices in vitro. Currently, a new path has been taken with the extensive efforts to develop paediatric VADs, which require totally different design constraints. This paper presents the manufacturing details of an economical simulator of the systemic paediatric circulation. This simulator allows the insertion of a paediatric VAD, includes a pumping ventricle, and is adjustable within the paediatric range. Rather than focusing on complexity and physiological simulation, this simulator is designed to be simple and practical for rapid device testing. The simulator was instrumented with medical sensors, and data were acquired under different conditions with and without the new PediaFlow™ paediatric VAD. The VAD was run at different impeller speeds while simulator settings such as vascular resistance and stroke volume were varied. The hydraulic performance of the VAD under pulsatile conditions could be characterized, and the magnetic suspension could be tested via manipulations such as cannula clamping. This compact mock loop has proven to be valuable throughout the PediaFlow development process and has the advantage that it is uncomplicated and can be manufactured cheaply. It can be produced by several research groups, and the results of different VADs can then be compared easily.

Relevance: 100.00%

Publisher:

Abstract:

With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of the prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass-per-charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in the SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of the prostate.
Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate the sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent from the training dataset used to construct them. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
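The peak-coding step described above can be sketched as a local-maximum test in a sliding window over the intensity (y) values. The window half-width `k` and the intensity threshold are hypothetical choices, not the paper's tuned values:

```python
# Turn a spectrum (list of y intensities, ordered by m/z) into 0/1 indicators:
# point i is coded 1 if its intensity is the maximum within the +/- k
# neighbourhood and exceeds `threshold`, else 0.
def binary_peak_indicators(intensities, k=2, threshold=0.0):
    n = len(intensities)
    flags = []
    for i, y in enumerate(intensities):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        window = intensities[lo:hi]
        flags.append(1 if y >= threshold and y == max(window) else 0)
    return flags
```

Coding intensities as binary peak indicators, as the paper argues, sidesteps the high coefficient of variation of the raw y values; a boosting algorithm then selects which of these binary predictors enter the summary classifier.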

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: Adequacy of organ perfusion depends on sufficient oxygen supply in relation to the metabolic needs. The aim of this study was to evaluate the relationship between gradients of free energy change and more commonly used parameters for the evaluation of the adequacy of organ perfusion, such as oxygen extraction, in patients undergoing valve replacement surgery using normothermic cardiopulmonary bypass (CPB). METHODS: In 43 cardiac patients, arterial, mixed venous, and hepato-venous blood samples were taken synchronously after induction of anaesthesia (preCPB), during CPB, and 2 and 7 h after admission to the intensive care unit (ICU+2, ICU+7). Blood gas analysis, cardiac output, and hepato-splanchnic blood flow were measured. Free energy change gradients between the mixed venous and arterial (-ΔΔG(v-a)) and hepato-venous and arterial (-ΔΔG(hv-a)) compartments were calculated. MEASUREMENTS AND RESULTS: Cardiac index (CI) increased from 1.9 (0.7) to 2.8 (1.3) L/min/m² (median, inter-quartile range) (p = 0.001), and hepato-splanchnic blood flow index (HBFI) from 0.6 (0.22) to 0.8 (0.53) L/min/m² (p = 0.001). Despite increasing flow, systemic oxygen extraction increased after CPB from 24 (10)% to 35 (10)% at ICU+2 (p = 0.002), and splanchnic oxygen extraction increased during CPB from 37 (19)% to 52 (14)% (p = 0.001), and remained high thereafter. After CPB, high splanchnic and systemic gradients of free energy change were associated with high splanchnic and systemic oxygen extraction, respectively (p = 0.001 and 0.033, respectively). CONCLUSION: Gradients of free energy change may be helpful in characterising the adequacy of perfusion in cardiac surgery patients, independently of measurements or calculations of oxygen transport data.
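The oxygen-extraction side of the comparison is straightforward to reproduce from blood-gas data; a sketch using the standard oxygen-content formula (Hüfner constant 1.34 mL O2/g Hb, plasma solubility 0.0031 mL/dL/mmHg). The patient values in the example are invented, and the study's free-energy gradients are a separate, more involved calculation not shown here:

```python
# Oxygen content of a blood sample (mL O2 / dL):
# haemoglobin-bound fraction + physically dissolved fraction.
# hb in g/dL, so2 as a fraction (0..1), po2 in mmHg.
def o2_content(hb, so2, po2, hufner=1.34, solubility=0.0031):
    return hb * hufner * so2 + solubility * po2

# Fraction of delivered oxygen extracted by the tissues,
# from arterial and (mixed or hepato-) venous oxygen content.
def o2_extraction(cao2, cvo2):
    return (cao2 - cvo2) / cao2
```

Using hepato-venous instead of mixed venous content in `o2_extraction` yields the splanchnic rather than the systemic extraction, mirroring the two compartments compared in the study.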

Relevance: 100.00%

Publisher:

Abstract:

We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
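The bistable synapse described above can be caricatured in a few lines. The constants are hypothetical, and the real model couples the jump direction to the postsynaptic depolarization and a calcium-like trace of postsynaptic spikes rather than a single boolean; this sketch only shows the bistability and drift mechanics:

```python
THETA = 0.5          # bistability threshold on the internal variable X in [0, 1]
A_UP, A_DOWN = 0.2, 0.2   # jump sizes at a presynaptic spike
DRIFT = 0.05              # relaxation step toward the nearer stable state

# Jump of the internal synaptic variable at a presynaptic spike:
# up if the postsynaptic side is in the potentiating range, down otherwise.
def on_pre_spike(X, potentiating):
    X += A_UP if potentiating else -A_DOWN
    return min(1.0, max(0.0, X))

# Between spikes, X drifts toward 0 or 1; this is what preserves the
# memory indefinitely in the absence of stimulation.
def drift(X):
    return min(1.0, X + DRIFT) if X > THETA else max(0.0, X - DRIFT)

# Binary synaptic efficacy read out from the bistable variable.
def efficacy(X):
    return 1 if X > THETA else 0
```

Because the drift always pushes X toward whichever stable state it is currently nearest, only spike patterns that carry X across the threshold change the stored bit, which is why the memories are robust to spontaneous activity.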