995 results for Split-operator Methods
Abstract:
The Pseudo-Spectral Time Domain (PSTD) method is an alternative time-marching method to classical leapfrog finite difference schemes in the simulation of wave-like propagating phenomena. It is based on the fundamentals of the Fourier transform to compute the spatial derivatives of hyperbolic differential equations. It therefore yields an isotropic operator that can be implemented efficiently for room acoustics simulations. However, one of the first issues to be solved is the modeling of wall absorption. Unfortunately, there are no references in the technical literature concerning this problem. In this paper, assuming real and constant locally reacting impedances, several proposals to overcome this problem are presented, validated, and compared to analytical solutions in different scenarios.
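The Fourier-based spatial derivative at the heart of pseudo-spectral methods can be sketched as follows: transform to Fourier space, multiply each mode by i·k, and transform back. A plain O(N²) DFT is used here to keep the sketch dependency-free (a real implementation would use an FFT); this is a generic illustration, not the paper's PSTD scheme.

```python
import cmath
import math

def spectral_derivative(samples):
    """Differentiate a periodic signal via the discrete Fourier transform."""
    n = len(samples)
    # Forward DFT (O(n^2), for illustration only).
    F = [sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
         for k in range(n)]
    # Multiply mode k by i*k, using signed wavenumbers for the upper half.
    for k in range(n):
        if n % 2 == 0 and k == n // 2:
            wavenumber = 0  # unmatched Nyquist mode carries no usable derivative
        elif k > n // 2:
            wavenumber = k - n
        else:
            wavenumber = k
        F[k] *= 1j * wavenumber
    # Inverse DFT; the input is real, so keep the real part.
    return [sum(F[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)).real / n
            for j in range(n)]

# Example: d/dx sin(x) = cos(x) on a periodic grid of 16 points.
xs = [2 * math.pi * j / 16 for j in range(16)]
deriv = spectral_derivative([math.sin(x) for x in xs])
```

For smooth periodic functions the derivative is exact to machine precision, which is the "isotropic operator" advantage the abstract refers to.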
Abstract:
A `next' operator, s, is built on the set R1 = (0,1] − {1 − 1/e}, defining a partial order that, with the help of the axiom of choice, can be extended to a total order on R1. Besides, the orbits {s^n(a)}_n are all dense in R1 and are constituted by elements of the same arithmetical character: if a is an algebraic irrational of degree k, all the elements in a's orbit are algebraic of degree k; if a is transcendental, all are transcendental. Moreover, the asymptotic distribution function of the sequence formed by the elements in any of the half-orbits is a continuous, strictly increasing, singular function very similar to the well-known Minkowski ?(x) function.
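For reference, the Minkowski question-mark function mentioned above is the standard singular function defined on continued-fraction expansions (this is the textbook definition, not a construction from this paper):

```latex
% For x \in (0,1) with continued fraction x = [0; a_1, a_2, a_3, \dots],
?(x) \;=\; 2\sum_{k \ge 1} (-1)^{k+1}\, 2^{-(a_1 + a_2 + \cdots + a_k)}
% e.g. x = 1/2 = [0;2] gives ?(1/2) = 2 \cdot 2^{-2} = 1/2.
```

?(x) is continuous and strictly increasing, yet its derivative vanishes almost everywhere, which is the sense in which the distribution function above is "singular".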
Abstract:
OBJECTIVES: To evaluate morbidity associated with the radial forearm free flap donor site and to compare functional and aesthetic outcomes of ulnar-based transposition flap (UBTF) vs split-thickness skin graft (STSG) closure of the donor site. DESIGN: Case-control study. SETTING: Tertiary care institution. PATIENTS: The inclusion criteria were flap size not exceeding 30 cm², patient availability for a single follow-up visit, and performance of surgery at least 6 months previously. Forty-four patients were included in the study and were reviewed. Twenty-two patients had UBTF closure, and 22 had STSG closure. MAIN OUTCOME MEASURES: Variables analyzed included wrist mobility, Michigan Hand Outcomes Questionnaire scores, pinch and grip strength (using a dynamometer), and hand sensitivity (using monofilament testing over the radial nerve distribution). In analyses of operated arms vs nonoperated arms, variables obtained only for the operated arms included Vancouver Scar Scale scores and visual analog scale scores for Aesthetics and Overall Arm Function. RESULTS: The mean (SD) wrist extension was significantly better in the UBTF group (56.0° [10.4°] for nonoperated arms and 62.0° [9.7°] for operated arms) than in the STSG group (59.0° [7.1°] for nonoperated arms and 58.4° [12.1°] for operated arms) (P = .02). The improvement in wrist range of motion for the UBTF group approached statistical significance (P = .07). All other variables (Michigan Hand Outcomes Questionnaire scores, pinch and grip strength, hand sensitivity, and visual analog scale scores) were significantly better for nonoperated arms vs operated arms, but no significant differences were observed between the UBTF and STSG groups. CONCLUSIONS: The radial forearm free flap donor site carries significant morbidity. Donor site UBTF closure was associated with improved wrist extension and represents an alternative method of closure for small donor site defects.
Abstract:
AASHTO has a standard test method for determining the specific gravity of aggregates. The people in the Aggregate Section of the Central Materials Laboratory perform the AASHTO T-85 test for AMRL inspections and reference samples. Iowa's test method 201B, for specific gravity determinations, requires more time and more care to perform than the AASHTO procedure. The major difference between the two procedures is that T-85 requires the sample to be weighed in water, whereas 201B requires the use of a 2-quart pycnometer jar. Efficiency in the Central Laboratory would be increased if the AASHTO procedure for coarse aggregate specific gravity determinations were adopted. The questions to be answered were: (1) Do the two procedures yield the same test results? (2) Do the two procedures yield the same precision? An experiment was conducted to study the different test methods. From the experimental results, specific gravity determinations by the AASHTO T-85 method were found to correlate with those obtained by the Iowa 201B method with an R-squared value of 0.99. The absorption values correlated with an R-squared value of 0.98. The single-operator precision was equivalent for the two methods. Hence, this procedure was recommended for adoption in the Central Laboratory.
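The correlation check described above (an R-squared value between paired measurements from the two methods) can be sketched as follows; the helper and the sample values below are illustrative, not the study's data.

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit of ys on xs.

    R^2 = Sxy^2 / (Sxx * Syy), i.e. the squared Pearson correlation,
    which is what comparing two measurement methods pairwise reduces to.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    syy = sum((y - mean_y) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Illustrative paired specific-gravity readings (method A vs method B).
method_a = [2.61, 2.65, 2.70, 2.58, 2.63]
method_b = [2.60, 2.66, 2.69, 2.59, 2.62]
fit = r_squared(method_a, method_b)
```

An R² near 1 (as the 0.99 reported above) indicates the two procedures rank and scale the samples nearly identically.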
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. This algorithm, in consequence, shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
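The equivalence claimed above rests on the generic Schur-complement reduction of a block-partitioned linear system. As a reminder (this is the standard identity, not the MSFV-specific tangential-approximation construction), eliminating the interior unknowns x_i leaves a reduced system in the coarse unknowns x_c:

```latex
\begin{pmatrix} A_{ii} & A_{ic} \\ A_{ci} & A_{cc} \end{pmatrix}
\begin{pmatrix} x_i \\ x_c \end{pmatrix}
=
\begin{pmatrix} b_i \\ b_c \end{pmatrix}
\;\Longrightarrow\;
\underbrace{\left(A_{cc} - A_{ci}\,A_{ii}^{-1}\,A_{ic}\right)}_{\text{Schur complement}} x_c
= b_c - A_{ci}\,A_{ii}^{-1}\, b_i .
```

The abstract's point is that the MSFV coarse operator can be seen as this Schur complement with the coupling block approximated on a single-cell overlapping grid, plus coarse-scale mass-balance constraints.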
Abstract:
OBJECTIVE: (1) To quantify wear of two different denture tooth materials in vivo with two study designs, (2) to relate tooth variables to vertical loss. METHODS: Two different denture tooth materials had been used (experimental material = test; DCL = control). In study 1 (split-mouth, 6 test centers) 60 subjects received complete dentures; in study 2 (two-arm, 1 test center) 29 subjects. In study 1 the mandibular dentures were supported by implants in 33% of the subjects, in study 2 in only 3% of the subjects. Impressions of the dentures were taken and poured with improved stone at baseline and after 6, 12, 18 and 24 months. Each operator evaluated the wear subjectively. Wear analysis was carried out with a laser scanning device. Maximal vertical loss of the attrition zones was calculated for each tooth cusp and tooth. A mixed linear model was used to statistically analyse the logarithmically transformed wear data. RESULTS: Due to drop-outs and unmatchable casts, only 47 subjects of study 1 and 14 of study 2 completed the 2-year recall. Overall, 75% of all teeth present could be analysed. There was no statistically significant difference in the overall wear between the test and control material for either study 1 or study 2. The relative increase in wear over time was similar in both study designs. However, a strong subject effect and center effect were observed. The fixed factors included in the model (time, tooth, center, etc.) accounted for 43% of the variability, whereas the random subject effect accounted for another 30% of the variability, leaving about 28% of unexplained variability. More wear was consistently recorded in the maxillary teeth compared to the mandibular teeth and in the first molar teeth compared to the premolar teeth and the second molars. Likewise, the supporting cusps showed more wear than the non-supporting cusps. The amount of wear did not depend on whether or not the lower dentures were supported by implants.
The subjective wear assessment was correct in about 67% of the cases, if it is postulated that a wear difference of 100 μm should be subjectively detectable. SIGNIFICANCE: The clinical wear of denture teeth is highly variable, with a strong patient effect. More wear can be expected in maxillary denture teeth compared to mandibular teeth, in first molars compared to premolars, and in supported cusps compared to non-supported cusps. Laboratory data on the wear of denture tooth materials may not be confirmed in well-structured clinical trials, probably due to the large inter-individual variability.
Abstract:
Surface roughness is one of the quality criteria of paper. It is measured with devices that physically probe the paper surface and with optical instruments. These measurements require laboratory conditions, but faster, directly on-line measurements would be needed in the paper industry. The roughness of a paper surface can be expressed as a single roughness value for the sample. In this work the sample is divided into significant regions, and a separate roughness value is computed for each region. Several methods have been used to measure roughness. In this work, a generally accepted statistical method is used in addition to the distance transform. In paper surface roughness measurement there has been a need to divide the analysed sample into regions on the basis of roughness. Region division makes it possible to delimit the areas of the sample that appear clearly rougher. The distance transform produces regions, which are then analysed. These regions are merged into coherent regions with different segmentation methods. Algorithms based on the PNN method (Pairwise Nearest Neighbor) and on merging neighboring regions have been used. A split-and-merge approach has also been examined. Validation of segmented images has usually been carried out by human inspection. The approach of this work is to compare the generally accepted statistical method with the segmentation results. A high correlation between these results indicates successful segmentation. The results of the different experiments have been compared with hypothesis testing. Two sample series, measured with OptiTopo and with a profilometer, were analysed in this work. The starting parameters of the distance transform that were varied during the experiments were the number and location of the seed points. The same parameter changes were made for all the algorithms used for region merging. After the distance transform, the correlation was stronger for the samples measured with the profilometer than for those measured with OptiTopo.
For the segmented OptiTopo samples the correlation improved more strongly than for the profilometer samples. The correlation was best for the results produced by the PNN method.
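The distance transform used above as the starting point for region formation can be sketched as a breadth-first propagation from seed cells; this is a generic city-block distance transform on a binary grid, not the thesis's exact algorithm or parameters.

```python
from collections import deque

def distance_transform(grid):
    """4-connected (city-block) distance to the nearest seed (0) cell.

    Seed cells get distance 0; distances propagate outward ring by ring,
    which is exactly a breadth-first search from all seeds at once.
    """
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    queue = deque()
    # Seed the queue with all background cells at distance 0.
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                dist[r][c] = 0
                queue.append((r, c))
    # Propagate distances outward.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

Regions can then be formed, for example, by grouping cells around local maxima of the distance map before the merging step.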
Abstract:
BACKGROUND: Chronic kidney disease (CKD) accelerates the vascular stiffening related to age. Arterial stiffness may be evaluated by measuring the carotid-femoral pulse wave velocity (PWV) or, more simply, as recommended by KDOQI, by monitoring pulse pressure (PP). Both correlate with survival and the incidence of cardiovascular disease. PWV can also be estimated on the brachial artery using a Mobil-O-Graph, a non-operator-dependent automatic device. The aim was to analyse whether, in a dialysis population, PWV obtained by Mobil-O-Graph (MogPWV) is more sensitive for vascular aging than PP. METHODS: A cohort of 143 patients from 4 dialysis units has been followed, measuring MogPWV and PP every 3 to 6 months, and compared to a control group with the same risk factors but an eGFR > 30 ml/min. RESULTS: MogPWV, contrarily to PP, did discriminate the dialysis population from the control group. The mean difference, translated into age, between the two populations was 8.4 years. The increase in MogPWV as a function of age was more rapid in the dialysis group. 13.3% of the dialysis patients but only 3.0% of the control group were outliers for MogPWV. The mortality rate (16 out of 143) was similar in outliers and inliers (7.4 and 8.0%/year). Stratifying patients according to MogPWV, a significant difference in survival was seen. A high parathormone (PTH) level and being dialysed for hypertensive nephropathy were associated with a higher baseline MogPWV. CONCLUSIONS: Assessing PWV on the brachial artery using a Mobil-O-Graph is a valid and simple alternative which, in the dialysis population, is more sensitive for vascular aging than PP. As demonstrated in previous studies, PWV correlates with mortality. Among specific CKD risk factors, only PTH is associated with a higher baseline PWV. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT02327962.
Abstract:
Wireless local area networks have gained great popularity over recent decades. This thesis deals with the design and development of a user authentication system for a wireless multi-operator network. In a wireless multi-operator network, users can access the services of several different operators. First, existing authentication methods and systems are reviewed, after which an authentication system for wireless multi-operator networks is described. Two alternative designs for the authentication system are presented, the so-called multi-session and single-session models. The multi-session model is the conventional approach to user authentication in computer systems: the user must identify and authenticate himself to every service in the network separately. The single-session model aims at better reliability and usability: the user authenticates only once and can then access several services. The final part of the thesis describes the implementation of the designed system. In addition, alternative implementation approaches are proposed, the weaknesses of the system are analysed, and possibilities for further development are discussed.
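The single-session model above can be sketched as a token that an authentication server issues once and that every operator's service can verify without re-authenticating the user. The shared secret, message format, and function names here are illustrative assumptions, not the thesis's implementation.

```python
import hashlib
import hmac

# Hypothetical secret shared between the authentication server and the
# operators' services (in practice this would be provisioned per operator).
SECRET = b"demo-shared-secret"

def issue_token(user: str) -> str:
    """Authentication server: sign the user identity once (single sign-on)."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token: str) -> bool:
    """Any service: check the token without asking the user to log in again."""
    user, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A forged or tampered token fails verification, while a token issued once is accepted by every service that knows the secret, which is the usability gain of the single-session model.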
Abstract:
PURPOSE: Congenital hypogonadotropic hypogonadism (CHH) and split hand/foot malformation (SHFM) are two rare genetic conditions. Here we report a clinical entity comprising the two. METHODS: We identified patients with CHH and SHFM through international collaboration. Probands and available family members underwent phenotyping and screening for FGFR1 mutations. The impact of identified mutations was assessed by sequence- and structure-based predictions and/or functional assays. RESULTS: We identified eight probands with CHH with (n = 3; Kallmann syndrome) or without anosmia (n = 5) and SHFM, seven of whom (88%) harbor FGFR1 mutations. Of these seven, one individual is homozygous for p.V429E and six individuals are heterozygous for p.G348R, p.G485R, p.Q594*, p.E670A, p.V688L, or p.L712P. All mutations were predicted by in silico analysis to cause loss of function. Probands with FGFR1 mutations have severe gonadotropin-releasing hormone deficiency (absent puberty and/or cryptorchidism and/or micropenis). SHFM in both hands and feet was observed only in the patient with the homozygous p.V429E mutation; V429 maps to the fibroblast growth factor receptor substrate 2α binding domain of FGFR1, and functional studies of the p.V429E mutation demonstrated that it decreased recruitment and phosphorylation of fibroblast growth factor receptor substrate 2α to FGFR1, thereby resulting in reduced mitogen-activated protein kinase signaling. CONCLUSION: FGFR1 should be prioritized for genetic testing in patients with CHH and SHFM because the likelihood of a mutation increases from 10% in the general CHH population to 88% in these patients. Genet Med 17(8), 651-659.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains the classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults; therefore, other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated.
At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
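Error control coding at the data link layer can be illustrated with the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword. This is a generic textbook sketch, not the specific coding scheme studied in the thesis.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Bit layout (1-based positions): [p1, p2, d1, p3, d2, d3, d4],
    with each parity bit giving even parity over the positions it covers.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

A single transient fault on the link (one flipped bit) is transparently corrected by the receiver; a second simultaneous error defeats the code, which is one reason coding alone does not handle intermittent and permanent faults well.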
Abstract:
Mobility of atrazine in soil has contributed to the detection of levels above the legal limit in surface water and groundwater in Europe and the United States. The use of new formulations can reduce or minimize the impacts caused by the intensive use of this herbicide in Brazil, mainly in regions with higher agricultural intensification. The objective of this study was to compare the leaching of a commercial formulation of atrazine (WG) with that of a controlled-release formulation (xerogel) using bioassay and chromatographic methods of analysis. The experiment was a split-plot randomized block design with four replications, in a (2 x 6) + 1 arrangement. The atrazine formulations (WG and xerogel) were allocated to the main plots, and the herbicide concentrations (0, 3200, 3600, 4200, 5400 and 8000 g ha-1) to the subplots. Leaching was determined comparatively by using bioassays with oat and chromatographic analysis. The results showed a greater concentration of the herbicide in the topsoil (0-4 cm) in the treatment with the xerogel formulation in comparison with the commercial formulation, which contradicts the results obtained with the bioassays, probably because the amount of herbicide available for uptake by plants in the xerogel formulation is less than that available in the WG formulation.
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the interaction between the human and the manipulator, and ideal position control in the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods.
It can be used without dismantling the sensor from its application, and it requires a smaller number of standard loads for calibration. It is also more cost-efficient and faster in comparison to traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
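The directed random search engine above combines Particle Swarm Optimization with bacterial foraging; a minimal plain-PSO sketch (without the E. coli foraging extension, and with illustrative inertia and acceleration parameters) looks like this:

```python
import random

def pso_minimize(f, dim, iters=200, swarm=20, seed=1):
    """Minimal particle swarm optimization sketch (illustrative only).

    Each particle tracks its personal best; the swarm shares a global best.
    Velocities blend inertia, a pull toward the personal best, and a pull
    toward the global best.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function f(x) = sum(x_i^2).
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In the thesis's setting, f would be the real-time simulator's cost for a candidate set of controller parameters, so that each generation is vetted in simulation before reaching the real process.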
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
Abstract:
Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high recognition accuracy because it relies on the iris, which is located in a place that remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most serious and critical one. Current segmentation methods are still limited in localizing the iris because they assume a circular pupil shape. In this research, Daugman's method is used to investigate the segmentation techniques. Eyelid detection is another step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. The Hamming distance is used for the comparison of iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detector operators is performed. It is observed that the Canny operator is best suited to extract most of the edges used to generate the iris code for comparison. A recognition rate of 89% and a rejection rate of 95% are achieved.
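The Hamming-distance comparison of iris codes used in the recognition stage can be sketched as follows; the optional occlusion masks and parameter names are illustrative, not the exact template layout of this study.

```python
def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits between two binary iris codes.

    Optional masks flag bits occluded by eyelids or eyelashes; only bits
    valid in both codes are compared. Two codes from the same iris yield
    a small distance; codes from different irises cluster near 0.5.
    """
    valid = 0
    differing = 0
    for i, (a, b) in enumerate(zip(code_a, code_b)):
        if mask_a is not None and not mask_a[i]:
            continue  # bit occluded in template A
        if mask_b is not None and not mask_b[i]:
            continue  # bit occluded in template B
        valid += 1
        differing += a != b
    return differing / valid if valid else 1.0
```

Recognition then amounts to accepting a match when the distance falls below a chosen threshold.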