922 results for "Efficient error correction"
Abstract:
Adsorbents functionalized with chelating agents are effective in the removal of heavy metals from aqueous solutions. Important properties of such adsorbents are high binding affinity as well as regenerability. In this study, the aminopolycarboxylic acids EDTA and DTPA were immobilized on the surface of silica gel, chitosan, and their hybrid materials to obtain chelating adsorbents for heavy metals such as Co(II), Ni(II), Cd(II), and Pb(II). New knowledge about the adsorption properties of EDTA- and DTPA-functionalized adsorbents was obtained. Experimental work showed the effectiveness, regenerability, and stability of the studied adsorbents. Both advantages and disadvantages of the adsorbents were evaluated. For example, the EDTA-functionalized chitosan-silica hybrid materials combined the benefits of silica gel and chitosan while at the same time diminishing their observed drawbacks. Modeling of adsorption kinetics and isotherms is an important step in the design process. Therefore, several kinetic and isotherm models were introduced and applied in this work. Important aspects such as the effect of the error function, data range, initial guess values, and linearization were discussed and investigated. The most suitable model was selected by comparing the experimental and simulated data and by evaluating the correspondence between the theory behind the model and the properties of the adsorbent. In addition, modeling of two-component data was conducted using various extended isotherms. Modeling results for the one- and two-component systems supported each other. Finally, application testing of EDTA- and DTPA-functionalized adsorbents was conducted. The most important result was the applicability of DTPA-functionalized silica gel and chitosan in capturing Co(II) from its aqueous EDTA chelate. Moreover, these adsorbents were efficient in various solution matrices.
In addition, separation of Ni(II) from Co(II), as well as of Ni(II) and Pb(II) from Co(II) and Cd(II), was observed in two- and multimetal systems. Lastly, EDTA- and DTPA-functionalized silica gels were successfully used to preconcentrate metal ions from both pure and saline waters prior to their analysis.
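The isotherm-modeling issues mentioned in the abstract (choice of error function, linearization) can be illustrated with a minimal sketch: fitting the Langmuir isotherm qe = qmax*KL*Ce/(1 + KL*Ce) through its common linearized form Ce/qe = Ce/qmax + 1/(KL*qmax). All parameter values and data below are hypothetical, not taken from the thesis.

```python
# Hedged sketch: linearized Langmuir isotherm fit on synthetic data.
# qmax (mg/g) and KL (L/mg) are illustrative values, not thesis data.

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: adsorbed amount qe as a function of Ce."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Synthetic equilibrium data generated from known "true" parameters.
QMAX_TRUE, KL_TRUE = 100.0, 0.05
ce = [1, 2, 5, 10, 20, 50, 100]
qe = [langmuir(c, QMAX_TRUE, KL_TRUE) for c in ce]

# Linearized form: Ce/qe = Ce/qmax + 1/(KL*qmax) -> simple linear regression.
x, y = ce, [c / q for c, q in zip(ce, qe)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

qmax_fit = 1.0 / slope          # slope = 1/qmax
kl_fit = slope / intercept      # intercept = 1/(KL*qmax), so KL = slope/intercept

print(qmax_fit, kl_fit)
```

With noise-free data the linearized and direct nonlinear fits agree; with real measurements, linearization reweights the residuals, which is one reason the thesis compares different error functions.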
Abstract:
Persistent luminescence materials can store energy from solar radiation or artificial lighting and release it over a period of several hours without a continuous excitation source. These materials are widely used to improve human safety in emergency and traffic signalization. They can also be utilized in novel applications including solar cells, medical diagnostics, radiation detectors and structural damage sensors. The development of these materials is currently based on trial-and-error methods. The tailoring of new materials is also hindered by the lack of knowledge on the role of their intrinsic and extrinsic lattice defects in the relevant mechanisms. The goal of this work was to clarify the persistent luminescence mechanisms by combining ab initio density functional theory (DFT) calculations with selected experimental methods. The DFT approach enables full control of both the nature of the defects and their locations in the host lattice. The materials studied in the present work, distrontium magnesium disilicate (Sr2MgSi2O7) and strontium aluminate (SrAl2O4), are among the most efficient persistent luminescence hosts when doped with divalent europium (Eu2+) and co-doped with trivalent rare earth ions R3+ (R: Y, La-Nd, Sm, Gd-Lu). The polycrystalline materials were prepared with the solid-state method and their structural and phase purity was confirmed by X-ray powder diffraction. Their local crystal structure was studied by high-resolution transmission electron microscopy. The crystal and electronic structure of the non-doped as well as the Eu2+-, R2+/3+- and other defect-containing materials were studied using DFT calculations. The experimental trap depths were obtained using thermoluminescence (TL) spectroscopy. The emission and excitation of Sr2MgSi2O7:Eu2+,Dy3+ were also studied. Significant modifications in the local crystal structure due to the Eu2+ ion and lattice defects were found by both the experimental and the DFT methods.
The charge compensation effects induced by the R3+ co-doping further increased the number of defects and distortions in the host lattice. As for the electronic structure of Sr2MgSi2O7 and SrAl2O4, the experimental band gap energy of the host materials was well reproduced by the calculations. The DFT-calculated Eu2+ and R2+/3+ 4f^n as well as 4f^(n-1)5d^1 ground states in the Sr2MgSi2O7 band structure provide an independent verification of an empirical model constructed from rather sparse experimental data for the R3+ and especially the R2+ ions. The intrinsic and defect-induced electron traps were found to act together as energy storage sites contributing to the materials' efficient persistent luminescence. The calculated trap energy range agreed with the trap structure of Sr2MgSi2O7 obtained from TL measurements. More experimental studies should be carried out for SrAl2O4 to compare with the DFT calculations. The calculated and experimental results show that the electron traps created by both the rare earth ions and vacancies are modified by defect aggregation and charge compensation effects. The relationships between this modification and the energy storage properties of the solid-state materials are discussed.
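The thermoluminescence measurements mentioned above probe trap depths through glow curves. As a generic illustration (not the actual model or parameters fitted for Sr2MgSi2O7 or SrAl2O4), a first-order Randall-Wilkins glow curve can be integrated numerically; the trap depth E, frequency factor s and heating rate below are made-up values.

```python
import math

# Hedged sketch: a first-order (Randall-Wilkins) thermoluminescence glow
# curve under a linear heating ramp. Trap depth E, frequency factor S and
# heating rate BETA are illustrative, not parameters from the thesis.
K_B = 8.617e-5        # Boltzmann constant, eV/K
E = 0.9               # trap depth, eV (hypothetical)
S = 1e12              # frequency factor, 1/s (hypothetical)
BETA = 1.0            # linear heating rate, K/s

n = 1.0               # trapped-charge population (normalized)
dT = 0.01             # temperature step, K
T, T_END = 300.0, 500.0
peak_T, peak_I = T, 0.0
while T < T_END:
    rate = S * math.exp(-E / (K_B * T))     # escape probability per second
    intensity = n * rate                    # TL intensity ~ detrapping rate
    if intensity > peak_I:
        peak_I, peak_T = intensity, T
    n *= math.exp(-rate * dT / BETA)        # exact decay over one step
    T += dT

print(round(peak_T, 1))   # glow-peak temperature for these parameters
```

In practice the relation between the glow-peak position and the trap depth (e.g. via peak-shape or initial-rise methods) is what lets TL measurements estimate the trap energies compared against the DFT values.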
Abstract:
Thermal and air conditions inside animal facilities change during the day under the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed over the facility area must be monitored. This work suggests that the temporal variation of environmental variables of interest for animal production, monitored within an animal facility, can be modeled accurately from discrete-time records. The aim of this study was to develop a numerical method to correct for the temporal variation of these environmental variables, transforming the data so that the observations become independent of the time spent during the measurement. The proposed method brings values recorded with time delays close to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The correction model was validated for the air temperature parameter, and the values corrected by the method did not differ from the real values recorded by data loggers (Tukey's test at 5% significance).
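One simple way to realize the kind of correction described, sketched here under the assumption that each spatial point is visited in two successive measurement rounds, is linear interpolation of the two time-stamped readings to a common reference instant. The readings below are hypothetical, not the study's data.

```python
# Hedged sketch: correcting time-delayed readings to a common reference
# instant by linear interpolation between two measurement rounds.
# All readings below are hypothetical, not the study's data.

def correct_to_instant(t1, v1, t2, v2, t_ref):
    """Estimate the value at t_ref from readings (t1, v1) and (t2, v2)."""
    return v1 + (v2 - v1) * (t_ref - t1) / (t2 - t1)

# A point visited at minute 3 of round 1 and minute 3 of round 2 (rounds
# 60 min apart); temperature drifted from 24.0 to 27.0 deg C in between.
t_ref = 30.0          # reference instant: 30 min after round 1 started
corrected = correct_to_instant(3.0, 24.0, 63.0, 27.0, t_ref)
print(corrected)      # linear estimate of the temperature at t_ref
```

Applying this to every monitored point makes the whole set of observations refer to the same instant, which is the property the study validates against data-logger records.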
Abstract:
The nearly 200-year-old scientific discipline of organic synthetic chemistry has contributed strongly to the welfare of modern societies. One of the flagships of organic synthetic chemistry is the development and production of new pharmaceuticals, and especially of the active substances therein. It is therefore important to develop new synthetic methods that can be applied to the preparation of pharmaceutically relevant target structures. In this context, the ultimate goal is not merely a successful synthesis of the target molecule; it is increasingly important to develop synthetic routes that fulfil the criteria of sustainable development. One of the most central tools available to an organic chemist in this context is catalysis, or more specifically the possibility of applying various catalytic reactions in the preparation of complex target structures. The corresponding industrial processes are characterized by high efficiency and minimized waste production, which naturally benefits the chemical industry while considerably reducing negative environmental effects. In this doctoral thesis, new synthetic routes for the production of fine chemicals of pharmaceutical relevance have been developed by combining relatively simple transformations into new reaction sequences. All reaction sequences discussed in this thesis began with a metal-mediated allylation of selected aldehydes or aldimines. The obtained products, containing a carbon-carbon double bond with an adjacent hydroxyl or amino group, were then modified further by applying well-known catalytic reactions. All synthesized molecules presented in this thesis are characterized as fine chemicals with high potential for pharmaceutical applications. In addition, a wide variety of catalytic reactions were successfully applied in the synthesis of these molecules, which in turn reinforces the importance of catalytic tools in the organic chemist's toolbox.
Abstract:
Traumatic diaphragmatic hernia is defined as a laceration of the diaphragm with herniation of abdominal viscera into the thorax. It is usually asymptomatic, except in cases of obstruction, strangulation, necrosis or perforation of the herniated viscera. It is classified as acute, latent or chronic, according to the evolutive period. In the latent phase, symptoms are indefinite, and radiological signs suggestive of thoracic affections are frequent and can induce a diagnostic error, leading to inadequate treatment. This article presents a case of chronic traumatic diaphragmatic hernia complicated by a gastropleurocutaneous fistula due to inadequate thoracic drainage. Considering that this is a chronic affection with an unquestionable surgical indication, owing to the risk of complications, a detailed diagnostic investigation is essential, aiming both to avoid untimely or inadequate therapeutic conduct and to reduce the morbidity and mortality of the affection. Recently, the videolaparoscopic approach has proved to be more precise than other diagnostic methods, through direct visualization of the diaphragmatic laceration, allowing its correction by immediate suture.
Abstract:
Objective: to assess the impact of the admission shift on the in-hospital mortality of trauma patients who underwent surgery. Methods: a retrospective observational cohort study from November 2011 to March 2012, with data collected from electronic medical records. The following variables were statistically analyzed: age, gender, city of origin, marital status, risk classification at admission (based on the Manchester Protocol), degree of contamination, time/shift of admission, day of admission and hospital outcome. Results: during the study period, 563 trauma victims underwent surgery, with a mean age of 35.5 years (± 20.7); 422 (75%) were male, 276 (49.9%) were admitted during the night shift and 205 (36.4%) on weekends. Patients admitted at night and on weekends had higher mortality [19 (6.9%) vs. 6 (2.2%), p=0.014, and 11 (5.4%) vs. 14 (3.9%), p=0.014, respectively]. In the multivariate analysis, the independent predictors of mortality were night admission (OR 3.15), red risk classification (OR 4.87), and age (OR 1.17). Conclusion: patients admitted during the night shift and on weekends were more severely injured and presented a higher mortality rate. Night-shift admission was an independent factor of surgical mortality in trauma patients, along with red risk classification and age.
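As a rough illustration of the mortality contrast behind such figures, a crude (unadjusted) odds ratio for night versus day admission can be computed from the counts reported in the abstract; note that the reported OR of 3.15 comes from the multivariate model, so the crude value is expected to differ.

```python
# Crude (unadjusted) odds ratio for night vs. day admission, computed
# from the counts in the abstract. The adjusted OR of 3.15 came from the
# multivariate model and is expected to differ from this crude value.
night_deaths, night_total = 19, 276
day_deaths, day_total = 6, 563 - 276

night_survivors = night_total - night_deaths
day_survivors = day_total - day_deaths

crude_or = (night_deaths * day_survivors) / (day_deaths * night_survivors)
print(round(crude_or, 2))
```

The multivariate adjustment (for risk classification and age) is what turns this crude contrast into the independent effect estimate quoted in the conclusion.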
Abstract:
Objective: to analyze the performance of two surgical meshes of different compositions during the healing process of abdominal wall defects in rats. Methods: thirty-three adult Wistar rats were anesthetized and subjected to removal of a 1.5 cm x 2 cm area of the anterior abdominal wall, sparing the skin; in 17 animals the defect was corrected by edge-to-edge surgical suture of a mesh made of polypropylene + poliglecaprone (Group U - UltraproTM); in 16 animals the defect was corrected with a surgical mesh made of polypropylene + polydioxanone + cellulose (Group P - ProceedTM). Each group was divided into two subgroups according to the moment of euthanasia (seven or 28 days after the operation). The parameters analyzed were macroscopic (adherence), microscopic (quantification of mature and immature collagen) and tensiometric (maximum tension and maximum rupture strength). Results: there was an increase in type I collagen in the ProceedTM group from seven to 28 days (p = 0.047). There was also an increase in rupture tension in both groups between the two periods. The ProceedTM mesh showed lower rupture tension and tissue deformity at seven days, becoming equal to the other group at day 28. Conclusion: the meshes yield similar final results, and further studies with larger numbers of animals must be carried out for a better assessment.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To benefit fully from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput and reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform is also developed on top of this architecture in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with performance similar to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
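The dynamic voltage and frequency scaling techniques referred to above rest on the standard CMOS dynamic-power relation P_dyn = a * C * V^2 * f. A sketch with made-up capacitance and activity values shows why scaling voltage and frequency together is so effective.

```python
# Hedged sketch: the standard CMOS dynamic-power model behind DVFS,
# P_dyn = a * C * V^2 * f. Activity factor and capacitance are made up.
def dynamic_power(activity, capacitance, voltage, frequency):
    """Switching power of a CMOS block (watts)."""
    return activity * capacitance * voltage ** 2 * frequency

base = dynamic_power(0.5, 1e-9, 1.0, 1e9)       # nominal operating point
scaled = dynamic_power(0.5, 1e-9, 0.5, 0.5e9)   # V and f both halved

print(scaled / base)   # cubic saving: (1/2)^2 * (1/2) = 1/8
```

Leakage power, which the power-gating techniques in the thesis target, follows a different model and is not captured by this expression.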
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement rises. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors so that the error model matches the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of this redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate real experimental validation. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
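The parameter-identification step can be illustrated, far more simply than for the 10-DOF hybrid robot, with a planar two-link arm whose link lengths are identified by linear least squares from measured tip positions. All lengths and angles below are hypothetical; the thesis itself uses DH/POE error models with DE and MCMC identification.

```python
import math

# Hedged sketch of kinematic parameter identification: recover the true
# link lengths of a planar 2-link arm from measured tip positions via
# linear least squares. Values are made up; the thesis applies DH/POE
# error models with DE and MCMC to a 10-DOF serial-parallel hybrid robot.

def tip(l1, l2, t1, t2):
    """Forward kinematics of a planar 2-link arm."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

NOMINAL = (1.00, 1.00)          # design values used by the controller
TRUE = (1.02, 0.97)             # actual (manufactured) link lengths
poses = [(0.1, 0.5), (0.7, -0.3), (1.2, 0.9), (-0.4, 1.1)]

# Tip position is linear in (l1, l2): each pose yields two equations.
rows, obs = [], []
for t1, t2 in poses:
    px, py = tip(*TRUE, t1, t2)          # "measured" positions
    rows += [[math.cos(t1), math.cos(t1 + t2)],
             [math.sin(t1), math.sin(t1 + t2)]]
    obs += [px, py]

# Solve the 2x2 normal equations (A^T A) l = A^T b by Cramer's rule.
a11 = sum(r[0] * r[0] for r in rows)
a12 = sum(r[0] * r[1] for r in rows)
a22 = sum(r[1] * r[1] for r in rows)
b1 = sum(r[0] * o for r, o in zip(rows, obs))
b2 = sum(r[1] * o for r, o in zip(rows, obs))
det = a11 * a22 - a12 * a12
l1_id = (a22 * b1 - a12 * b2) / det
l2_id = (a11 * b2 - a12 * b1) / det

errors = (l1_id - NOMINAL[0], l2_id - NOMINAL[1])  # identified parameter errors
print(round(l1_id, 6), round(l2_id, 6))
```

For the real robot the error model is nonlinear in most parameters, which is why global optimizers such as DE and MCMC are used instead of a single linear solve.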
Abstract:
Purpose: to evaluate the precision of two- and three-dimensional ultrasonography in determining the vertebral lesion level (the first open vertebra) in patients with spina bifida. Methods: this was a prospective longitudinal study comprising fetuses with open spina bifida treated in the fetal medicine division of the department of obstetrics of Hospital das Clínicas of the Universidade de São Paulo between 2004 and 2013. The vertebral lesion level was established using both two- and three-dimensional ultrasonography in 50 fetuses (two examiners for each method). The lesion level in the neonatal period was established by radiological assessment of the spine. All pregnancies were followed in our hospital prenatally, and delivery was scheduled to allow immediate postnatal surgical correction. Results: two-dimensional sonography precisely estimated the spina bifida level in 53% of the cases. The estimate error was within one vertebra in 80% of the cases, within two vertebrae in 89%, and within three vertebrae in 100%, with good interobserver agreement. Three-dimensional ultrasonography precisely estimated the lesion level in 50% of the cases. The estimate error was within one vertebra in 82% of the cases, within two vertebrae in 90%, and within three vertebrae in 100%, also with good interobserver agreement. Whenever an estimate error was observed, both two- and three-dimensional ultrasonography tended to underestimate the true lesion level (55.3% and 62% of the cases, respectively). Conclusions: no relevant difference in diagnostic performance was observed between two- and three-dimensional ultrasonography. Three-dimensional ultrasonography showed no additional benefit in diagnosing the lesion level in fetuses with spina bifida. Errors in both methods tended to underestimate the lesion level.
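The "error within N vertebrae" agreement figures above are a simple tabulation of absolute level differences. A sketch with made-up lesion levels (integer vertebra indices, not the study's data) shows the computation.

```python
# Hedged sketch: fraction of cases whose sonographic lesion-level estimate
# falls within N vertebrae of the radiological (true) level. The levels
# below are made-up integer vertebra indices, not the study's data.

def within_n(estimated, true_levels, n):
    """Share of cases with |estimate - true| <= n vertebrae."""
    pairs = list(zip(estimated, true_levels))
    return sum(abs(e - t) <= n for e, t in pairs) / len(pairs)

est = [12, 13, 11, 12, 14, 10, 13, 12, 11, 15]
true = [12, 12, 11, 13, 12, 10, 13, 14, 11, 14]

print(within_n(est, true, 0), within_n(est, true, 1), within_n(est, true, 2))
```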
Abstract:
This study investigated the surface hardening of steels via experimental tests using a multi-kilowatt fiber laser as the laser source. The influence of laser power and laser power density on the hardening effect was investigated. The microhardness analysis of various laser hardened steels was done. A thermodynamic model was developed to evaluate the thermal process of the surface treatment of a wide thin steel plate with a Gaussian laser beam. The effect of laser linear oscillation hardening (LLOS) of steel was examined. An as-rolled ferritic-pearlitic steel and a tempered martensitic steel with 0.37 wt% C content were hardened under various laser power levels and laser power densities. The optimum power density that produced the maximum hardness was found to be dependent on the laser power. The effect of laser power density on the produced hardness was revealed. The surface hardness, hardened depth and required laser power density were compared between the samples. Fiber laser was briefly compared with high power diode laser in hardening medium-carbon steel. Microhardness (HV0.01) test was done on seven different laser hardened steels, including rolled steel, quenched and tempered steel, soft annealed alloyed steel and conventionally through-hardened steel consisting of different carbon and alloy contents. The surface hardness and hardened depth were compared among the samples. The effect of grain size on surface hardness of ferritic-pearlitic steel and pearlitic-cementite steel was evaluated. In-grain indentation was done to measure the hardness of pearlitic and cementite structures. The macrohardness of the base material was found to be related to the microhardness of the softer phase structure. The measured microhardness values were compared with the conventional macrohardness (HV5) results. A thermodynamic model was developed to calculate the temperature cycle, Ac1 and Ac3 boundaries, homogenization time and cooling rate. 
The equations were solved numerically with an error of less than 10^-8. The temperature distributions for various thicknesses were compared under different laser traverse speeds. The resulting temperature lag was verified by experiments done on six different steels. The calculated thermal cycle and hardened depth were compared with measured data. Correction coefficients were applied to the model for AISI 4340 steel. AISI 4340 steel was hardened by laser linear oscillation hardening (LLOS). Equations were derived to calculate the overlapped width of adjacent tracks and the number of overlapped scans in the center of the scanned track. The effect of oscillation frequency on the hardened depth was investigated by microscopic evaluation and hardness measurement. The homogeneity of hardness and hardened depth with different processing parameters was investigated. The hardness profiles were compared with the results obtained with conventional single-track hardening. LLOS proved to be well suited for surface hardening of a relatively large rectangular area with a considerable depth of hardening. Compared with conventional single-track scanning, LLOS produced notably smaller hardened depths, while at 40 and 100 Hz LLOS resulted in higher hardness within a depth of about 0.6 mm.
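The thermal modeling above can be hinted at with a generic sketch, a 1D explicit finite-difference solution of transient heat conduction, dT/dt = alpha * d2T/dx2. This is a stand-in, not the thesis's Gaussian-beam model, and the material values and boundary conditions are illustrative only.

```python
# Hedged sketch: explicit finite-difference solution of 1D transient heat
# conduction, dT/dt = alpha * d^2T/dx^2 -- a generic stand-in for the
# thesis's thermal model of a laser-heated plate. Values are illustrative.
ALPHA = 1.2e-5                 # thermal diffusivity of steel, m^2/s (approx.)
NX, LENGTH = 51, 0.01          # 51 nodes over a 10 mm thickness
DX = LENGTH / (NX - 1)
DT = 0.4 * DX * DX / ALPHA     # below the stability limit dx^2 / (2*alpha)

# Initial condition: surface briefly heated to 1000 C, bulk at 20 C.
temp = [1000.0] + [20.0] * (NX - 1)

for _ in range(2000):          # march forward in time
    new = temp[:]
    for i in range(1, NX - 1):
        new[i] = temp[i] + ALPHA * DT / DX**2 * (
            temp[i - 1] - 2 * temp[i] + temp[i + 1])
    new[0] = new[1]            # insulated surface after the pulse
    temp = new

print(round(max(temp), 1))     # the peak has diffused inward and dropped
```

The same marching scheme, refined until successive iterates change by less than a tolerance such as 10^-8, is the kind of numerical solution the abstract refers to; the real model also tracks the Ac1/Ac3 boundaries and cooling rates.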
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has been proven an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, thereby providing higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their mesh-based counterparts in terms of network cost, system performance and energy efficiency.
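Turn-model routing, mentioned above for the Honeycomb NoC, is easiest to show on a 2D mesh: the classic west-first rule forbids every turn into the west direction, which breaks all cycles in the channel-dependency graph and thus prevents deadlock. The sketch below uses a mesh for brevity; the thesis adapts turn-model ideas to the Honeycomb topology.

```python
# Hedged sketch: west-first turn-model routing on a 2D mesh (shown on a
# mesh for simplicity; the thesis targets a Honeycomb topology). All
# westward hops are taken first, so no turn *into* "west" ever occurs,
# which is what makes the turn model deadlock-free.

def west_first(src, dst):
    """Return the hop sequence from src to dst under the west-first rule."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    path = []
    while dx < 0:                     # all west hops first
        path.append("W"); dx += 1
    while dx > 0:                     # then east...
        path.append("E"); dx -= 1
    while dy > 0:                     # ...and north/south in any order
        path.append("N"); dy -= 1
    while dy < 0:
        path.append("S"); dy += 1
    return path

route = west_first((3, 1), (0, 3))
print(route)                          # ['W', 'W', 'W', 'N', 'N']
```

A fully adaptive implementation would allow any ordering of the non-west hops; the deterministic ordering here keeps the sketch minimal.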
Abstract:
The demand for more efficient manufacturing processes has been increasing in the last few years. The cold forging process is presented as a possible solution, because it allows the production of parts with a good surface finish and good mechanical properties. Nevertheless, cold forming sequence design is very empirical and based on the designer's experience. Computational modeling of each forming stage by the finite element method can make sequence design faster and more efficient, decreasing the use of conventional "trial and error" methods. In this study, a commercial general-purpose finite element package - ANSYS - has been applied to model a forming operation. Models have been developed to simulate the ring compression test and a basic forming operation (upsetting) that appears in most cold forging sequences. The simulated upsetting operation is one stage of the manufacturing process of automotive starter parts. Experiments were carried out to obtain the stress-strain curve of the material, the material flow during the simulated stage, and the required forming force. These experiments provided results used as numerical model input data and for validation of the model results. The comparison between experiments and numerical results confirms the potential of the developed methodology for die-filling prediction.
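For FE input, an experimental stress-strain curve is typically condensed into a hardening law; a common choice (assumed here for illustration, the abstract does not state which law was used) is Hollomon's sigma = K * eps**n, whose constants can be recovered by log-log regression.

```python
import math

# Hedged sketch: fitting the Hollomon hardening law sigma = K * eps**n by
# linear regression in log-log space. K and n are made-up values; the
# abstract does not state which hardening law fed the ANSYS model.
K_TRUE, N_TRUE = 700.0, 0.20       # MPa, dimensionless (hypothetical)
eps = [0.01, 0.02, 0.05, 0.10, 0.20, 0.30]
sigma = [K_TRUE * e ** N_TRUE for e in eps]

xs = [math.log(e) for e in eps]
ys = [math.log(s) for s in sigma]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
n_fit = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
k_fit = math.exp(my - n_fit * mx)   # intercept in log space is ln(K)

print(round(k_fit, 3), round(n_fit, 6))
```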
Abstract:
This article deals with a contour error controller (CEC) applied to a high-speed biaxial table. It works simultaneously with the table's axis controllers, assisting them. In the early stages of the investigation, it was observed that its main problem is imprecision when tracking non-linear contours at high speeds. The objectives of this work are to show that this problem is caused by the inexactness of the contour error mathematical model and to propose modifications to it. An additional term is included, resulting in a more accurate value of the contour error and enabling the use of this type of motion controller at higher feedrates. The response results from simulated and experimental tests are compared with those of a common PID controller and of the non-corrected CEC in order to analyse the effectiveness of this controller on the system. The main conclusions are that the proposed contour error mathematical model is simple, accurate and almost insensitive to the feedrate, and that a 20:1 reduction of the integral absolute contour error is possible.
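The geometric idea behind a contour error model is to decompose the tracking error into components along and normal to the reference path; only the normal component is contour error. The first-order sketch below illustrates this decomposition; the article's improved model adds a further correction term that is not reproduced here.

```python
import math

# Hedged sketch: first-order contour-error estimate as the component of
# the tracking error normal to the reference path. The article's improved
# model adds a further correction term that is not reproduced here.

def contour_error(err, tangent):
    """err: tracking-error vector; tangent: unit tangent of the path."""
    ex, ey = err
    tx, ty = tangent
    along = ex * tx + ey * ty              # projection onto the tangent
    nx, ny = ex - along * tx, ey - along * ty
    return math.hypot(nx, ny)              # normal (contour) component

# Path moving along +x; tool lags 0.3 behind and sits 0.4 off-track.
eps = contour_error((0.3, 0.4), (1.0, 0.0))
print(eps)   # 0.4: only the off-track part counts as contour error
```

On curved (non-linear) contours this first-order estimate degrades with feedrate, which is exactly the imprecision the article's extra term corrects.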
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
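The IPCW idea evaluated in the simulation study can be sketched for a Kaplan-Meier estimator: each subject contributes its weight (the inverse of its estimated probability of remaining uncensored) to the risk set and to the event count, and unit weights recover the ordinary estimator. Times, events and weights below are made up; estimating the censoring weights themselves (e.g. from a censoring model) is omitted.

```python
# Hedged sketch: an inverse-probability-of-censoring-weighted (IPCW)
# Kaplan-Meier estimator. Each subject contributes its weight to the risk
# set and to the event count; unit weights give the ordinary estimator.
# Times, events and weights are made up, and the step of estimating the
# censoring weights from a model is omitted.

def ipcw_km(times, events, weights, t):
    """Weighted Kaplan-Meier survival estimate at time t."""
    surv = 1.0
    for et in sorted({ti for ti, ei in zip(times, events) if ei}):
        if et > t:
            break
        at_risk = sum(w for ti, w in zip(times, weights) if ti >= et)
        deaths = sum(w for ti, ei, w in zip(times, events, weights)
                     if ei and ti == et)
        surv *= 1.0 - deaths / at_risk
    return surv

times = [1.0, 2.0, 3.0]
events = [True, True, False]          # third spell is censored
unit = [1.0, 1.0, 1.0]

print(ipcw_km(times, events, unit, 2.0))   # ordinary KM: (2/3)*(1/2) = 1/3
```

Under dependent censoring, well-estimated weights remove the bias that the unweighted estimator suffers, which is the property the simulation study assesses for design-based Kaplan-Meier and Cox estimators.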