881 results for high repetition rate
Abstract:
Based on the mass balance equations of solute transfer in a radial chromatographic column, a theoretical expression describing the column efficiency and the shape of the elution profile is obtained for the linear isotherm case. The trends in column efficiency and peak symmetry are then systematically discussed. The results show that in radial chromatography the relationship between column efficiency and volumetric flow rate is similar to that in axial chromatography; relatively high column efficiency can still be obtained at high flow rates in radial chromatography. As the retention factor of the solutes and the injection time increase, the column efficiency decreases monotonically. The effects of column diameter and column length on the column efficiency interfere with each other; it is more advantageous to increase the column efficiency by using columns with larger diameters and shorter lengths. From the discussion of the effect of diffusion on column efficiency, radial chromatography proves suitable for the separation of samples with relatively high diffusion coefficients, which points to a clear advantage in the preparative separation of samples such as proteins and DNA.
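For context, the efficiency-flow relationship invoked above is, in axial chromatography, classically captured by a van Deemter-type plate-height equation; the generic form below is given only as a reference point and is not the paper's derived radial expression:

H(u) = A + B/u + C·u,    N = L/H

where H is the plate height, u the linear velocity of the mobile phase, L the column length, N the plate number (column efficiency), and the A, B and C terms account for eddy diffusion, longitudinal diffusion and mass-transfer resistance, respectively. The abstract's claim is that the radial-column analogue behaves similarly, with a comparatively mild efficiency penalty at high flow rates.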
Abstract:
A monolithic enzymatic microreactor was prepared in a fused-silica capillary by in situ polymerization of acrylamide, glycidyl methacrylate (GMA) and ethylene dimethacrylate (EDMA) in the presence of a binary porogenic mixture of dodecanol and cyclohexanol, followed by ammonia solution treatment, glutaraldehyde activation and trypsin modification. The choice of acrylamide as co-monomer was found to improve the efficiency of trypsin modification and thus to increase the enzyme activity. The optimized microreactor offered very low back pressure, enabling the fast digestion of proteins flowing through the reactor. The performance of the monolithic microreactor was demonstrated with the digestion of cytochrome c at high flow rate. The digests were then characterized by CE and HPLC-MS/MS, with a sequence coverage of 57.7%. The digestion efficiency was found to be over 230 times that of the conventional method. In addition, for the first time, protein digestion carried out in a mixture of water and ACN was compared with the conventional aqueous reaction using MS/MS detection, and the former medium was found to be more compatible with, and more efficient for, protein digestion.
Abstract:
K. Rasmani and Q. Shen. Modifying weighted fuzzy subsethood-based rule models with fuzzy quantifiers. Proceedings of the 13th International Conference on Fuzzy Systems, pages 1679-1684, 2004
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences.
Abstract:
With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system, and the potential for use of application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, a much lower hit rate is possible in terms of bytes, because most profitably-cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller amount of cache space. Finally, we consider the use of organizational boundary information as an example of the potential for use of application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because of differences in the potential for sharing these two document types among multiple users.
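As an illustration of the trace-driven caching analysis described above, the following Python sketch replays a request trace through a byte-capacity LRU cache and reports both a URL hit rate and a byte hit rate; the (url, size_bytes) trace format, the LRU policy and the capacity parameter are illustrative assumptions, not the simulator used in the paper.

from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    # Replay (url, size_bytes) requests through a byte-capacity LRU cache
    # and return (url_hit_rate, byte_hit_rate).
    cache = OrderedDict()  # url -> size; most recently used entries last
    used = 0
    hits = reqs = hit_bytes = total_bytes = 0
    for url, size in trace:
        reqs += 1
        total_bytes += size
        if url in cache:
            hits += 1
            hit_bytes += size
            cache.move_to_end(url)
        elif size <= capacity_bytes:
            # Evict least recently used documents until the new one fits.
            while used + size > capacity_bytes:
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
            cache[url] = size
            used += size
    return hits / reqs, hit_bytes / total_bytes

Because the most profitably cached documents are small, the URL hit rate from such a replay will typically exceed the byte hit rate, which is the effect reported above.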
Abstract:
Mode-locked semiconductor lasers are compact pulsed sources with ultra-narrow pulse widths and high repetition rates. To use these sources in real applications, their performance needs to be optimised in several respects, usually by external control. We experimentally investigate the behaviour of recently developed quantum-dash mode-locked lasers (QDMLLs) emitting at 1.55 μm under external optical injection. Single-section and two-section lasers with different repetition frequencies and active-region structures are studied. In particular, we are interested in a regime in which the laser remains mode-locked while the individual modes are simultaneously phase-locked to the external laser. Injection-locked self-mode-locked lasers demonstrate tunable microwave generation at the first or second harmonic of the free-running repetition frequency with sub-MHz RF linewidth. For two-section mode-locked lasers, dual-mode optical injection (injection of two coherent CW lines) achieves narrowing of the RF linewidth to close to that of the electrical source, narrowing of the optical linewidths, and a reduction in the time-bandwidth product. Under optimised bias conditions of the slave laser, a repetition-frequency tuning ratio >2% is achieved, a record for a monolithic semiconductor mode-locked laser. In addition, we demonstrate a novel all-optical stabilisation technique for mode-locked semiconductor lasers that combines CW optical injection with optical feedback to simultaneously improve the time-bandwidth product and the timing jitter of the laser. This scheme needs no RF source and no optical-to-electrical conversion, and is thus ideal for photonic integration. Finally, an application of injection-locked mode-locked lasers is introduced in a multichannel phase-sensitive amplifier (PSA). We show that with dual-mode injection locking, simultaneous phase synchronisation of two channels to local pump sources is realised through a single injection-locking stage. An experimental proof of concept is demonstrated for two 10 Gbps phase-encoded (DPSK) channels, showing more than 7 dB of phase-sensitive gain and less than 1 dB penalty in receiver sensitivity.
Abstract:
Reflective modulators based on the combination of an electroabsorption modulator (EAM) and a semiconductor optical amplifier (SOA) are attractive devices for applications in long-reach carrier-distributed passive optical networks (PONs) due to the gain provided by the SOA and the high-speed, low-chirp modulation of the EAM. Integrated R-EAM-SOAs have experimentally shown two unexpected and unintuitive characteristics that are not observed in a single-pass transmission SOA: the clamping of the output power of the device around a maximum value, and low patterning distortion despite the SOA being in a regime of gain saturation. In this thesis a detailed analysis is carried out using both experimental measurements and modelling in order to understand these phenomena. For the first time it is shown that both the internal loss between SOA and R-EAM and the SOA gain play an integral role in the behaviour of gain-saturated R-EAM-SOAs. Internal loss and SOA gain are also optimised for use in carrier-distributed PONs in order to access both the positive effect of output power clamping, and hence upstream dynamic-range reduction, and low-patterning operation of the SOA. Reflective concepts are also gaining interest for metro transport networks and short-reach, high-bit-rate, inter-datacentre links. Moving the optical carrier generation away from the transmitter also has potential advantages for these applications, as it avoids the need for cooled photonics placed directly on hot router line-cards. A detailed analysis is carried out in this thesis on a novel colourless reflective duobinary modulator, which would enable wavelength flexibility in a power-efficient reflective metro node.
Abstract:
The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain a consistency in the ‘production process’. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers who are trusted to manage the necessary tasks, and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims at addressing this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours - potential power mechanisms - that can inhibit the double loop learning inherent in an Agile adoption, to determine how the Agile processes and behaviours can create these power mechanisms, and how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers. An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement to the framework, with changes required due to observations which were often different to what existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.
Abstract:
The outcomes of both (i) radiation therapy and (ii) preclinical small-animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequent to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, given that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.
In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.
Hardware design changes were implemented to reduce the overall fiber diameter of the nano-crystalline scintillator based fiber-optic detector (NanoFOD) system to <0.9 mm. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. The detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the ratio to the TPS dose value of 1.02-0.94=0.08. After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal-applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; however, a cap of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.
For preclinical measurements, the dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom for 225 kV x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4x4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD used a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, achieving 2.7% dose accuracy at the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half-maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences are explained mostly as an artifact of the measurement geometry and of volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was found at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and showed an average difference of 0.9% from a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without replacement or recalibration over its lifetime.
For small-animal irradiator systems, researchers deliver a given dose to a target organ by controlling exposure time. Currently, researchers calculate this exposure time by dividing the total dose they wish to deliver by a single quoted dose-rate value, a method that is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256x256x800 voxels of size 0.145x0.145x0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, namely utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations.
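The proposed prescription method amounts to replacing the single quoted dose-rate value with an organ- and filter-specific lookup. A minimal Python sketch of the calculation follows; the dose-rate values are hypothetical placeholders standing in for Monte Carlo derived tables such as those from the GATE/MOBY study above.

# Hypothetical organ- and x-ray-tube-filter-specific dose rates (cGy/s).
DOSE_RATE_CGY_PER_S = {
    ("liver", "0.3mm_Cu"): 1.10,
    ("lung", "0.3mm_Cu"): 1.35,
    ("liver", "4.0mm_Cu"): 0.85,
}

def exposure_time_s(prescribed_dose_cgy, organ, beam_filter):
    # Organ-aware replacement for "total dose / single dose rate".
    return prescribed_dose_cgy / DOSE_RATE_CGY_PER_S[(organ, beam_filter)]

# Example: prescribe 600 cGy to the liver with the 0.3 mm Cu filter.
print(round(exposure_time_s(600, "liver", "0.3mm_Cu"), 1), "s")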
Monte Carlo analysis was applied to 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. The results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at the 6 Gy dose level in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.
Abstract:
This paper presents the challenges encountered in modelling biofluids in microchannels; in particular, blood separation implemented in a T-microchannel device is analysed. Fluids behave differently at the microscale from their macroscale counterparts, and a different modelling approach has been adopted here, one that emphasizes the roles of viscous forces, high-shear-rate behaviour and particle interactions at the microscale. A T-microchannel design is numerically analysed by means of computational fluid dynamics (CFD) to investigate the effectiveness of blood separation based on the bifurcation law and other biophysical effects. The simulations show that the device can separate blood cells from plasma.
Abstract:
The stencil printing process is an important step in the assembly of Surface Mount Technology (SMT) devices. There is wide agreement in the industry that the paste printing process accounts for the majority of assembly defects, and experience with this process has shown that typically over 60% of all soldering defects are due to problems associated with the flow properties of solder pastes. Rheological measurements can therefore be used as a tool to study the deformation and flow experienced by the pastes during the stencil printing process. This paper presents results on the thixotropic behaviour of three pastes: a lead-based solder paste, a lead-free solder paste and an isotropic conductive adhesive (ICA). These materials are widely used as interconnect media in the electronics industry. Solder pastes are metal alloy particles suspended in a flux medium, while ICAs consist of silver flakes dispersed in an epoxy resin. The thixotropic behaviour was investigated through two rheological tests: (i) a hysteresis loop test and (ii) a steady shear rate test. In the hysteresis loop test, the shear rate was increased from 0.001 to 100 s-1 and then decreased from 100 to 0.001 s-1. In the steady shear rate test, the materials were subjected to constant shear rates of 0.100, 100 and 0.001 s-1, each for a period of 240 seconds. All the pastes showed a high degree of shear-thinning behaviour with time. This may be due to the agglomeration of particles in the flux or epoxy resin, which inhibits paste flow at low shear rates; high shear breaks the agglomerates into smaller pieces, which facilitates flow, so viscosity is reduced at high shear rate. The solder pastes exhibited a higher degree of structural breakdown than the ICA. The area between the up curve and the down curve of the hysteresis loop is an indication of the thixotropic behaviour of the pastes. Among the three pastes, the lead-free solder paste showed the largest area between the up and down curves, indicating the largest structural breakdown, followed by the lead-based solder paste and the ICA. In the steady shear rate test, the viscosity of the ICA showed the best recovery, with the steepest return towards its original viscosity after the removal of shear, indicating good dispersion quality: high shear has little effect on the microstructure of the ICA. In contrast, the lead-based paste showed the poorest recovery, meaning this paste undergoes a larger structural breakdown and its dispersion quality is poor, because its microstructure is easily disrupted by high shear. The structural breakdown during the application of shear, and the recovery after its removal, are important characteristics in the paste printing process: if the paste's viscosity drops low enough it may aid aperture filling, and quick recovery may prevent slumping.
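To make the loop-area measure of thixotropy concrete, the Python sketch below integrates the gap between the up- and down-sweep stress curves over the shear-rate range quoted above (0.001 to 100 s-1); the stress arrays are hypothetical placeholders for measured rheometer data.

import numpy as np

def hysteresis_area(shear_rate, stress_up, stress_down):
    # Trapezoidal area between the up and down curves of a hysteresis
    # loop test; a larger area indicates greater structural breakdown.
    gap = stress_up - stress_down
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(shear_rate)))

# Hypothetical shear-thinning sweeps spanning 0.001 to 100 s^-1.
rate = np.logspace(-3, 2, 50)
stress_up = 80.0 * rate**0.4    # placeholder up-curve stress (Pa)
stress_down = 65.0 * rate**0.4  # placeholder down-curve stress (Pa)
print("thixotropic loop area:", round(hysteresis_area(rate, stress_up, stress_down), 1))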
Abstract:
Thermosetting polymer materials are widely utilised in modern microelectronics packaging technology. These materials serve a number of functions, such as device bonding, structural support and physical protection of semiconductor dies. Typically, convection heating systems are used to raise the temperature of the materials to expedite the polymerisation process. The convection cure process has a number of drawbacks, including process durations generally in excess of 1 hour and the requirement to heat the entire printed circuit board assembly, inducing thermomechanical stresses which affect device reliability. Microwave energy is able to raise the temperature of materials in a rapid, controlled manner. As the microwave energy penetrates into the polymer materials, the heating can be considered volumetric, i.e. the rate of heating is approximately constant throughout the material. This enables a maximal heating rate far greater than is available with convection oven systems, which only raise the surface temperature of the polymer material and rely on thermal conduction to transfer heat energy into the bulk. The high heating rate, combined with the ability to vary the operating power of the microwave system, enables extremely rapid cure processes. Microwave curing of a commercially available encapsulation material has been studied experimentally and through the use of numerical modelling techniques. The material assessed is Henkel EO-1080, a single-component thermosetting epoxy. The producer suggests three typical convection oven cure options for EO-1080: 20 min at 150 °C, 90 min at 140 °C, or 120 min at 110 °C. Rapid curing of materials of this type using advanced microwave systems, such as the FAMOBS system [1], is of great interest to microelectronics manufacturers as it has the potential to reduce manufacturing costs, increase device reliability and enable new device designs. Experimental analysis has demonstrated that, in a realistic chip-on-board encapsulation scenario, the polymer material can be fully cured in approximately one minute. This corresponds to a reduction in cure time of approximately 95 percent relative to the convection oven process. Numerical assessment of the process [2] also suggests that cure times of approximately 70 seconds are feasible, whilst indicating that the decrease in process duration comes at the expense of variation in the degree of cure within the polymer.
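The volumetric-heating claim above follows from the standard dielectric heating relation, quoted here as general physics rather than as a property of EO-1080 or the FAMOBS system: the microwave power absorbed per unit volume is

p = 2π·f·ε0·ε''r·E_rms²

where f is the microwave frequency, ε0 the vacuum permittivity, ε''r the loss factor of the polymer and E_rms the local electric field strength. Wherever the field and loss factor are roughly uniform, the heating rate is therefore roughly uniform through the bulk, in contrast to convection heating, which deposits energy only at the surface.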
Abstract:
The purpose of this article is to report on a study of the conditions of inclusion of students with disabilities at a Chilean university. The research is a quantitative, descriptive, cross-sectional study. To collect the required data, a survey was developed and applied to 38 students with disabilities. The main results reveal a high retention rate among these students, who report a positive perception of their inclusion in university life and a high level of satisfaction with most of the services provided. Seven out of ten students surveyed report having received some form of educational support from their programs to pursue their studies. However, there is still a lack of connection between the current initiatives developed at the university to support the enrollment and retention of students. In addition, there is a lack of protocols and of training for teachers and staff. The study proposes that the university establish a management system that defines objectives, strategies and actions to improve the inclusion of people with disabilities.
Abstract:
We argue that the results published by Bao-Quan Ai et al [Phys. Rev. E 67, 022903 (2003)] on "correlated noise in a logistic growth model" are not correct. Their conclusion that for larger values of the correlation parameter λ the cell population is peaked at x=0, denoting a high extinction rate, is also incorrect. We find the reverse of their result: increasing λ promotes the stable growth of tumour cells. In particular, their results for the steady-state probability as a function of cell number at different correlation strengths, presented in their figures 1 and 2, show different behaviour than one would expect from the simple mathematical expression for the steady-state probability. Additionally, their interpretation that, at small values of the cell number, the steady-state probability increases as the correlation parameter is increased is also questionable. Another striking feature of their figures 1 and 3 is that, for the same values of the parameters λ and α, their simulation produces two different curves, both qualitatively and quantitatively.
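For reference, the disputed quantity has the generic stationary form of a one-dimensional Fokker-Planck equation (written generically here; the specific drift and diffusion of the Ai et al model are not reproduced):

P_st(x) = (N / B(x)) · exp( ∫^x A(y)/B(y) dy )

where A(x) is the drift, B(x) the noise-dependent diffusion and N a normalisation constant. The disagreement concerns how the extrema of P_st(x) shift as the correlation parameter λ is varied.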
Abstract:
Neutral gas depletion mechanisms are investigated in a dense low-temperature argon plasma, an inductively coupled magnetic neutral loop (NL) discharge. Gas temperatures are deduced from the Doppler profile of the 772.38 nm line absorbed by argon metastable atoms. Electron density and temperature measurements reveal that at pressures below 0.1 Pa, relatively high degrees of ionization (exceeding 1%) result in electron pressures, p(e) = kT(e)n(e), exceeding the neutral gas pressure. In this regime, neutral dynamics has to be taken into account, and depletion through comparatively high ionization rates becomes important. This additional depletion mechanism can be spatially separated due to non-uniform electron temperature and density profiles (a non-uniform ionization rate), while the gas temperature is rather uniform within the discharge region. Spatial profiles of the depletion of metastable argon atoms in the NL region are observed by laser-induced fluorescence spectroscopy. In this region, the depletion of ground-state argon atoms is expected to be even more pronounced, since in the investigated high-electron-density regime the ratio of metastable to ground-state argon atom densities is governed by the electron temperature, which peaks in the NL region. This neutral gas depletion is attributed to a high ionization rate in the NL zone and fast ion loss through ambipolar diffusion along the magnetic field lines. This is totally different from what is observed at pressures above 10 Pa, where the degree of ionization is relatively low (
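As a hedged worked example of the electron-pressure criterion above (the numbers are illustrative, not the paper's measured values): for n(e) = 1x10^17 m^-3 and kT(e) = 3 eV, p(e) = n(e)kT(e) ≈ 10^17 m^-3 x 3 x 1.6x10^-19 J ≈ 0.05 Pa, already comparable to a neutral gas pressure below 0.1 Pa, so the neutral gas can no longer be treated as a fixed background.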