959 results for Readability Formulas
Abstract:
The goal of this work was to synthesize and characterize new iron and copper complexes of the Schiff base 3-MeOsalen with ligands of biological relevance, with formulas [Fe(3-MeOsalen)NO2], [Fe(3-MeOsalen)(etil2-dtc)], [Fe(3-MeOsalen)NO], and Na[Cu(3-MeOsalen)NO2]. The compounds were characterized by vibrational spectroscopy in the infrared region (IR) and electronic spectroscopy in the ultraviolet-visible region (UV-Vis). Analysis of the infrared spectra confirmed the formation of the precursor complexes, as evidenced by changes in the vibrational frequencies ν(C=N) and ν(C-O) and by the emergence of metal-oxygen and metal-nitrogen vibrational modes. For the iron and copper nitro complexes, ν(NO2)asym was observed around 1300 cm-1 and ν(NO2)sym at 1271 cm-1, indicating that coordination occurs through the nitrogen atom. The spectrum of the complex [Fe(3-MeOsalen)(etil2-dtc)] exhibited two bands, ν(C-NR2) at 1508 cm-1 and ν(C-S) at 997 cm-1, the vibrational modes characteristic of the ligand coordinating in bidentate form. For the complex [Fe(3-MeOsalen)NO], a new intense band at 1670 cm-1 was observed and assigned to ν(NO). In the electronic spectra, the formation of the complexes was evidenced by shifts of the intraligand transition bands and by the emergence of new bands, such as the LMCT band (p Cl- → d* Fe3+) in [Fe(3-MeOsalen)Cl] and the d-d band in [Cu(3-MeOsalen)H2O]. For [Fe(3-MeOsalen)NO2], the notable feature was the absence of the LMCT band present in the precursor complex, while for [Cu(3-MeOsalen)NO2] a hypsochromic shift of the d-d band by 28 nm was found. The electronic spectrum of [Fe(3-MeOsalen)(etil2-dtc)] presented LMCT band shifts and changes in the intraligand transitions. As for [Fe(3-MeOsalen)NO], the intraligand transitions shifted to more energetic regions, reflecting the strong π-acceptor character of NO, and an MLCT band assigned to the dπFe(II)→π*(NO) transition was observed.
Abstract:
Event-B is a formal method for the modeling and verification of discrete transition systems. Event-B development yields proof obligations that must be verified (i.e., proved valid) in order to keep the produced models consistent. Satisfiability Modulo Theories (SMT) solvers are automated theorem provers used to verify the satisfiability of logic formulas with respect to a background theory (or combination of theories). SMT solvers not only handle large first-order formulas, but can also generate models and proofs, as well as identify unsatisfiable subsets of hypotheses (unsat cores). Tool support for Event-B is provided by the Rodin platform: an extensible Eclipse-based IDE that combines modeling and proving features. An SMT plug-in for Rodin has been developed to integrate alternative, efficient verification techniques into the platform. We implemented a series of complements to the SMT solver plug-in for Rodin, namely improvements to the user interface for cases where proof obligations are reported as invalid by the plug-in. Additionally, we modified some of the plug-in's features, such as support for proof generation and unsat-core extraction, to comply with the SMT-LIB standard for SMT solvers. We undertook tests using applicable proof obligations to demonstrate the new features. The contributions described can potentially improve productivity.
Abstract:
This work shows that combustion synthesis is a prominent alternative for obtaining nanostructured, high-purity ceramic powders of higher oxides, such as the ferrites of formulas Co(1-x)Zn(x)Fe2O4 and Ni(1-x)Zn(x)Fe2O4 with x varying in steps of 0.2 mol over the range 0.2 ≤ x ≤ 1.0 mol. These materials present magnetic properties with coexisting ferroelectric and ferrimagnetic states, and can be used in microstrip antennas and low-frequency selective surfaces in the miniaturized microwave range without performance loss. The material was obtained through the combustion process, followed by appropriate and ordered physical processing and a substrate sintering step, yielding a ceramic material of high purity on a nanometric scale. Vibrating Sample Magnetometer (VSM) analysis showed that these ferritic materials present parameters, such as hysteresis, characteristic of good-quality magnetic materials, in which the magnetization states can be changed abruptly with a relatively small variation of the field intensity, giving them broad applications in electronics. X-ray Diffraction (XRD) analysis of the ceramic powders synthesized at 900 °C characterized their structural and geometrical properties, crystallite size, and interplanar spacing. Other analyses were also performed, including Scanning Electron Microscopy (SEM), X-ray Fluorescence (XRF), and measurements of electric permittivity and loss tangent at high frequencies, using a ROHDE & SCHWARZ ZVB-14 Vector Network Analyzer (10 MHz-14 GHz).
Abstract:
This thesis investigates the numerical modelling of Dynamic Positioning (DP) in pack ice. A two-dimensional numerical model for ship-ice interaction was developed using the Discrete Element Method (DEM). A viscoelastic ice rheology was adopted to model the dynamic behaviour of the ice floes. Both ship-ice and ice-ice contacts were considered in the interaction force. The environmental forces and the hydrodynamic forces were calculated by empirical formulas. After the current position and external forces were calculated, a Proportional-Integral-Derivative (PID) controller and thrust allocation algorithms were applied to the vessel to control its motion and heading. The numerical model was coded in Fortran 90 and validated by comparing computational results to published data. Validation work was first carried out for the ship-ice interaction calculation, using former researchers' simulation and model test results for comparison. With confidence in the interaction model, case studies were conducted to predict the DP capability of a sample Arctic DP vessel.
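The PID control step described above can be sketched in a few lines. This is an illustrative stand-in, not the thesis's Fortran 90 code: the gains, the toy one-degree-of-freedom plant, and the heading setpoint are all assumed values for the demo.

```python
# Illustrative discrete PID controller for heading control; all gains and the
# toy integrator plant below are assumptions, not the thesis's model.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: heading rate responds proportionally to the commanded moment.
pid = PID(kp=2.0, ki=0.01, kd=0.5, dt=0.1)
heading, target = 0.0, 10.0   # degrees
for _ in range(500):
    command = pid.update(target, heading)
    heading += 0.05 * command * pid.dt   # crude first-order vessel response
print(round(heading, 2))   # settles near the 10-degree setpoint
```

In a real DP system the controller output would then pass to the thrust allocation algorithm, which distributes the commanded surge, sway, and yaw moments among the thrusters.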
Abstract:
The chemical structure of refractory marine dissolved organic matter (DOM) is still largely unknown. Electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI FT-ICR-MS) was used to resolve the complex mixtures of DOM and provide valuable information on elemental compositions on a molecular scale. We characterized and compared DOM from two sharply contrasting aquatic environments, algal-derived DOM from the Weddell Sea (Antarctica) and terrigenous DOM from pore water of a tropical mangrove area in northern Brazil. Several thousand molecular formulas in the mass range of 300-600 Da were identified and reproduced in element ratio plots. On the basis of molecular elemental composition and double-bond equivalents (DBE) we calculated an average composition for marine DOM. O/C ratios in the marine samples were lower (0.36 ± 0.01) than in the mangrove pore-water sample (0.42). A small proportion of chemical formulas with higher molecular mass in the marine samples were characterized by very low O/C and H/C ratios probably reflecting amphiphilic properties. The average number of unsaturations in the marine samples was surprisingly high (DBE = 9.9; mangrove pore water: DBE = 9.4) most likely due to a significant contribution of carbonyl carbon. There was no significant difference in elemental composition between surface and deep-water DOM in the Weddell Sea. Although there were some molecules with unique marine elemental composition, there was a conspicuous degree of similarity between the terrigenous and algal-derived end members. Approximately one third of the molecular formulas were present in all marine as well as in the mangrove samples. We infer that different forms of microbial degradation ultimately lead to similar structural features that are intrinsically refractory, independent of the source of the organic matter and the environmental conditions where degradation took place.
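The double-bond equivalent and element-ratio calculations above follow directly from the assigned molecular formulas. A minimal sketch, using the standard DBE formula for CHNOS compositions; the example formula is hypothetical, not one of the study's assignments:

```python
# Sketch: double-bond equivalents (DBE) and elemental ratios for molecular
# formulas of the kind assigned by FT-ICR-MS. Example formula is illustrative.

def dbe(c, h, n=0, o=0, s=0):
    """DBE = 1 + C - H/2 + N/2 (O and S do not enter the standard formula)."""
    return 1 + c - h / 2 + n / 2

def ratios(c, h, o):
    return {"O/C": o / c, "H/C": h / c}

# Hypothetical CHO formula in the 300-600 Da range, e.g. C20H24O8.
print(dbe(20, 24, o=8))    # -> 9.0
print(ratios(20, 24, 8))   # O/C = 0.40, H/C = 1.20
```

A DBE near 9-10 with O/C around 0.4, as in this toy example, is of the same order as the averages reported above for the marine and mangrove samples.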
Abstract:
The printed version is divided into volumes 1 and 2.
Abstract:
We argue that considering transitions at the same level as states, as first-class citizens, is advantageous in many cases. Namely, the use of atomic propositions on transitions, as well as on states, allows temporal formulas and strategies to be more powerful, general, and meaningful. We define egalitarian structures and logics, and show how they generalize well-known state-based, event-based, and mixed ones. We present translations from egalitarian to non-egalitarian settings that, in particular, allow the model checking of LTLR formulas using Maude’s LTL model checker. We have implemented these translations as a prototype in Maude itself.
Abstract:
In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as cylindrical waveguides, optical fibers, and boreholes in earth geological formations, are generally modeled as axisymmetric structures, i.e., an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in orthogonal directions.
In this report, several important improvements have been made to extend this efficient solver to anisotropic OPCL media. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are derived to extend its application to more general materials. A perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC), making the NMM method more accurate and efficient for wave diffusion problems in unbounded media and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to resolve the singularity problem that previous researchers have ignored. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove a relationship between the fields of opposite mode indices for different types of excitations, which can reduce the computational time by half. The formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are derived, and the excitations are generalized to line and surface current sources, extending the application of NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations have demonstrated the efficiency and accuracy of this method.
Finally, the improved NMM method is introduced to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk, symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and a commercial software package. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as length and the conductivity and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities, and permeabilities are modeled together with fractures in boreholes to investigate their effects on fracture detection. The results reveal that although the attenuation of the electromagnetic field through the casing is significant, the normalized secondary field is not weakened at low frequencies, ensuring that the induction tool remains applicable for fracture detection. A hybrid approach combining the NMM method and an integral-equation solver based on BCGS-FFT has been proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.
Abstract:
This dissertation shows the use of the constructal law to find the relation between the morphing of a system's configuration and improvements in the global performance of a complex flow system. It shows that the best features of both flow and heat transfer architectures can be found and predicted by using the constructal law in energy systems. Chapter 2 shows the effect of flow configuration on the heat transfer performance of a spiral-shaped pipe embedded in a cylindrical conducting volume. Several configurations were considered. Optimal spacings between the spiral turns and spire planes exist such that the volumetric heat transfer rate is maximal. The optimized features of the heat transfer architecture are robust. Chapter 3 shows the heat transfer performance of a helically shaped pipe embedded in a cylindrical conducting volume. It shows that the optimized features of the heat transfer architecture are robust with respect to changes in several physical parameters. Chapter 4 reports analytical formulas for the effective permeability of several configurations of fissured systems, using the closed-form description of tree networks designed to provide flow access. The permeability formulas do not vary much from one tree design to the next, suggesting that similar formulas may apply to naturally fissured porous media whose precise details are unknown, as occur in natural reservoirs. Chapter 5 illustrates a counterflow heat exchanger consisting of two plenums with a core. The results show that the overall flow and thermal resistances are lowest when the core is absent. Overall, constructal design governs the evolution of flow configuration in nature and energy systems.
Abstract:
While it is well known that exposure to radiation can result in cataract formation, questions still remain about the presence of a dose threshold in radiation cataractogenesis. Since the exposure history from diagnostic CT exams is well documented in a patient’s medical record, the population of patients chronically exposed to radiation from head CT exams may be an interesting area to explore for further research in this area. However, there are some challenges in estimating lens dose from head CT exams. An accurate lens dosimetry model would have to account for differences in imaging protocols, differences in head size, and the use of any dose reduction methods.
The overall objective of this dissertation was to develop a comprehensive method to estimate radiation dose to the lens of the eye for patients receiving CT scans of the head. This research is comprised of a physics component, in which a lens dosimetry model was derived for head CT, and a clinical component, which involved the application of that dosimetry model to patient data.
The physics component includes experiments related to the physical measurement of the radiation dose to the lens by various types of dosimeters placed within anthropomorphic phantoms. These dosimeters include high-sensitivity MOSFETs, TLDs, and radiochromic film. The six anthropomorphic phantoms used in these experiments range in age from newborn to adult.
First, the lens dose from five clinically relevant head CT protocols was measured in the anthropomorphic phantoms with MOSFET dosimeters on two state-of-the-art CT scanners. The volume CT dose index (CTDIvol), which is a standard CT output index, was compared to the measured lens doses. Phantom age-specific CTDIvol-to-lens dose conversion factors were derived using linear regression analysis. Since head size can vary among individuals of the same age, a method was derived to estimate the CTDIvol-to-lens dose conversion factor using the effective head diameter. These conversion factors were derived for each scanner individually, but also were derived with the combined data from the two scanners as a means to investigate the feasibility of a scanner-independent method. Using the scanner-independent method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter, most of the fitted lens dose values fell within 10-15% of the measured values from the phantom study, suggesting that this is a fairly accurate method of estimating lens dose from the CTDIvol with knowledge of the patient’s head size.
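The regression step just described can be illustrated with a short sketch: fit a line relating effective head diameter to the CTDIvol-to-lens-dose conversion factor, then multiply an exam's CTDIvol by the fitted factor. All numbers below are fabricated for demonstration only; they are not the dissertation's measured values.

```python
# Illustrative sketch of deriving a size-specific CTDIvol-to-lens-dose
# conversion factor by ordinary least squares. Data values are fabricated.

def linfit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical phantom data: effective head diameter (cm) vs conversion factor.
diam = [12.0, 14.0, 16.0, 18.0, 20.0]
cf   = [1.10, 1.02, 0.94, 0.86, 0.78]   # exactly linear here: cf = 1.58 - 0.04*d
a, b = linfit(diam, cf)

def lens_dose(ctdi_vol, diameter):
    """Estimated lens dose (mGy) = CTDIvol * fitted conversion factor."""
    return ctdi_vol * (a * diameter + b)

print(round(lens_dose(40.0, 15.0), 2))   # 40 mGy CTDIvol, 15 cm head -> 39.2
```

The dissertation's actual factors are phantom-age- and scanner-specific; this sketch only shows the shape of the calculation.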
Second, the dose reduction potential of organ-based tube current modulation (OB-TCM) and its effect on the CTDIvol-to-lens dose estimation method was investigated. The lens dose was measured with MOSFET dosimeters placed within the same six anthropomorphic phantoms. The phantoms were scanned with the five clinical head CT protocols with OB-TCM enabled on the one scanner model at our institution equipped with this software. The average decrease in lens dose with OB-TCM ranged from 13.5 to 26.0%. Using the size-specific method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter for protocols with OB-TCM, the majority of the fitted lens dose values fell within 15-18% of the measured values from the phantom study.
Third, the effect of gantry angulation on lens dose was investigated by measuring the lens dose with TLDs placed within the six anthropomorphic phantoms. The 2-dimensional spatial distribution of dose within the areas of the phantoms containing the orbit was measured with radiochromic film. A method was derived to determine the CTDIvol-to-lens dose conversion factor based upon distance from the primary beam scan range to the lens. The average dose to the lens region decreased substantially for almost all the phantoms (ranging from 67 to 92%) when the orbit was exposed to scattered radiation compared to the primary beam. The effectiveness of this method to reduce lens dose is highly dependent upon the shape and size of the head, which influences whether or not the angled scan range coverage can include the entire brain volume and still avoid the orbit.
The clinical component of this dissertation involved performing retrospective patient studies in the pediatric and adult populations, and reconstructing the lens doses from head CT examinations with the methods derived in the physics component. The cumulative lens doses in the patients selected for the retrospective study ranged from 40 to 1020 mGy in the pediatric group, and 53 to 2900 mGy in the adult group.
This dissertation represents a comprehensive approach to lens of the eye dosimetry in CT imaging of the head. The collected data and derived formulas can be used in future studies on radiation-induced cataracts from repeated CT imaging of the head. Additionally, it can be used in the areas of personalized patient dose management, and protocol optimization and clinician training.
Abstract:
Aims: 1. To investigate the reliability and readability of information on the Internet on adult orthodontics. 2. To evaluate the profile and treatment of adults by specialist orthodontists in the Republic of Ireland (ROI). Materials and methods: 1. An Internet search was conducted in May 2015 using three search engines (Google, Yahoo and Bing), with two search terms ("adult orthodontics" and "adult braces"). The first 50 websites from each engine were screened and exclusion criteria applied. Included websites were then assessed for reliability using the JAMA benchmarks, the DISCERN and LIDA tools and the presence of the HON seal. Readability was assessed using the FRES. 2. A pilot-tested questionnaire about adult orthodontics was distributed to 122 eligible specialist orthodontists in the ROI. Questions addressed general and treatment information about adult orthodontic patients, methods of information provision and respondent demographics. Results: 1. Thirteen websites met the inclusion criteria. Three websites contained all JAMA benchmarks and one displayed the HON seal. The mean overall score for DISCERN was 3.9/5 and the mean total LIDA score was 115/120. The average FRES score was 63.1. 2. The questionnaire yielded a response rate of 83%. The typical demographic profile of adult orthodontic patients was professional females aged 25 to 35 years. The most common incisor relationship and skeletal base were Class II, division 1 (51%) and Class II (61%), respectively. Aesthetic upper brackets and metal lower brackets were the most frequently used appliances. Only 30% of orthodontists advise their adult patients to find extra information on the Internet. Conclusions: 1. The reliability and readability of information on the Internet on adult orthodontics is of moderate quality. 2. The provision of adult orthodontic treatment is common among specialist orthodontists in the Republic of Ireland.
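The FRES used above is the Flesch Reading Ease Score, computed as 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words); scores in the 60-70 band, like the 63.1 average reported, correspond to plain English. A minimal sketch follows, using a crude vowel-group heuristic for syllable counting (a real implementation would use a pronunciation dictionary or a proper syllabifier); the sample sentence is made up.

```python
# Sketch of the Flesch Reading Ease Score (FRES).
# Syllable counting here is a rough vowel-group heuristic, not linguistically exact.
import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fres(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = "Braces straighten teeth. Adult treatment is common. Ask your orthodontist."
print(round(fres(sample), 1))
```

Higher scores mean easier text; patient-facing health information is often targeted at roughly the 60-70 range.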
Abstract:
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, for example by using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand, and are therefore more useful for modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called the predicate transition net (PrT net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is a software tool supporting the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast, but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool that supports the formal analysis capabilities of this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
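The HLPN ideas above (structured tokens, guards, and algebraic transition expressions) can be illustrated with a toy "token game". This is an assumed miniature, not PIPE+ or full PrT-net semantics: places hold data tokens, and a transition fires only when its guard, a formula over the bound tokens, holds.

```python
# Toy high-level Petri net: a transition with a guard and an output expression.
class Transition:
    def __init__(self, inputs, outputs, guard, action):
        self.inputs, self.outputs = inputs, outputs
        self.guard, self.action = guard, action

def fire_once(marking, t):
    """Try to fire t on one binding of input tokens; return True if it fired."""
    for x in list(marking[t.inputs[0]]):
        for y in list(marking[t.inputs[1]]):
            if t.guard(x, y):
                marking[t.inputs[0]].remove(x)   # consume input tokens
                marking[t.inputs[1]].remove(y)
                marking[t.outputs[0]].append(t.action(x, y))  # produce output token
                return True
    return False

# Places hold structured tokens (here: integers); guard and action are formulas.
marking = {"A": [1, 4], "B": [3], "C": []}
add = Transition(inputs=["A", "B"], outputs=["C"],
                 guard=lambda x, y: x < y,    # transition condition
                 action=lambda x, y: x + y)   # transition formula for the output

while fire_once(marking, add):   # run the token game to quiescence
    pass
print(marking)   # the single enabled binding (1, 3) fires, producing token 4 in C
```

Simulation, in the sense used above, is exactly this kind of repeated firing; model checking instead explores all reachable markings against a temporal-logic property.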
Abstract:
Lower jaws (containing the teeth), eyes, and skin samples were collected from harp seals (Pagophilus groenlandicus) in the southeastern Barents Sea for the purpose of comparing age estimates obtained by 3 different methods, the traditional technique of counting growth layer groups (GLGs) in teeth and 2 novel approaches, aspartic acid racemization (AAR) in eye lens nuclei and telomere sequence analyses as a proxy for telomere length. A significant correlation between age estimates obtained using GLGs and AAR was found, whereas no correlation was found between GLGs and telomere length. An AAR rate (k Asp) of 0.00130/year ± 0.00005 SE and a D-enantiomer to L-enantiomer ratio at birth (D/L 0 value) of 0.01933 ± 0.00048 SE were estimated by regression of D/L ratios against GLG ages from 25 animals (12 selected teeth that had high readability and 13 known-aged animals). AAR could prove to be useful, particularly for ageing older animals in species such as harp seals where difficulties in counting GLGs tend to increase with age. Age estimation by telomere length did not show any correlation with GLG ages and is not recommended for harp seals.
Abstract:
Twenty-one core samples from DSDP/IPOD Leg 63 were analyzed for products of chlorophyll diagenesis. In addition to the tetrapyrrole pigments, perylene and carotenoid pigments were isolated and identified. The 16 core samples from the San Miguel Gap site (467) and the five from the Baja California borderland location (471) afforded the unique opportunity of examining tetrapyrrole diagenesis in clay-rich marine sediments that are very high in total organic matter. The chelation reaction, whereby free-base porphyrins give rise to metalloporphyrins (viz., nickel), is well documented within the downhole sequence of sediments from the San Miguel Gap (Site 467). Recognition of unique arrays of highly dealkylated copper and nickel ETIO-porphyrins, exhibiting nearly identical carbon-number homologies (viz., C-23 to C-30; mode = C-26), enabled subtraction of this component (thought to be derived from an allochthonous source) and thus permitted description of the actual in situ diagenesis of autochthonous chlorophyll derivatives.
Abstract:
Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR can therefore be widely used to solve problems in econometrics, environmental sciences, and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as that of C. Jennen-Steinmetz and S. Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
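The flavor of a power-based sample-size calculation for QR can be sketched from the standard asymptotics of the QR slope estimator. This is a hedged illustration, not the paper's exact method (which is implemented in R): it assumes i.i.d. errors with density f evaluated at the tau-th error quantile, a single covariate with variance var_x, and a two-sided Wald-type test of the slope.

```python
# Normal-approximation sample size for detecting a QR slope of size delta.
# Uses the asymptotic slope variance tau*(1-tau) / (f^2 * var_x); all inputs
# in the example (tau, delta, error density, covariate variance) are assumptions.
from math import sqrt, pi, ceil
from statistics import NormalDist

def qr_sample_size(tau, delta, f_at_quantile, var_x, alpha=0.05, power=0.80):
    """Smallest n so a two-sided level-alpha test of the covariate effect
    detects a slope of size delta with the requested power."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    avar = tau * (1 - tau) / (f_at_quantile ** 2 * var_x)
    return ceil((za + zb) ** 2 * avar / delta ** 2)

# Example: median regression (tau = 0.5), standard normal errors
# (f(0) = 1/sqrt(2*pi)), a unit-variance covariate, and effect size 0.5.
f0 = 1 / sqrt(2 * pi)
print(qr_sample_size(0.5, 0.5, f0, 1.0))   # -> 50
```

The dependence on f at the target quantile is why the planning stage needs an error-distribution assumption (parametric or kernel-based), even though QR estimation itself does not.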