Abstract:
Sn-Ag-Cu (SAC) solder alloys are the leading Pb-free alternative for the electronics industry. Since their introduction, efforts have been made to improve their performance by tuning processing and composition to achieve a lower melting point and better wettability. Nanostructured alloys with a large boundary content are known to depress the melting points of metals and alloys. In this article we explore this possibility by processing prealloyed SAC alloys close to the SAC305 composition (Sn-3wt%Ag-0.5wt%Cu) by mechanical milling, which results in the formation of nanostructured alloys. A Pulverisette ball mill (P7) and a vibratory ball mill are used to mill the powders at room temperature and at low temperature (−104 °C), respectively. We report a relatively small depression of the melting point, ranging up to 5 °C with respect to the original alloys. The minimum grain sizes achieved and the depression of the melting point are similar for both the room-temperature and the low-temperature processed samples. An attempt has been made to rationalize the observations in terms of the basic processes occurring during milling.
Abstract:
We report three prominent observations on nanoscale charge-ordered (CO) manganites RE(1-x)AE(x)MnO(3) (RE = Nd, Pr; AE = Ca; x = 0.5), probed by temperature-dependent magnetization and magneto-transport coupled with electron magnetic/paramagnetic resonance spectroscopy (EMR/EPR). First, evidence is presented that the predominant ground-state magnetic phase in nanoscale CO manganites is ferromagnetic and coexists with a residual antiferromagnetic phase. Second, the shallow minimum in the temperature dependence of the EPR linewidth reveals the presence of a charge-ordered phase in nanoscale manganites that is not detected by DC static magnetization and transport measurements. Third, the EPR linewidth, reflective of spin dynamics, increases significantly with decreasing particle size in CO manganites. We discuss these observations for samples of different particle sizes and give possible explanations. We show that EMR spectroscopy is a highly useful technique for probing the 'hindered charge-ordered phase' in nanoscale CO manganites, which is not accessible to static DC magnetization and transport measurements.
Abstract:
Spatial data analysis has become increasingly important in ecology and economics during the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our aim is to find a good strategy for identifying the best model from a candidate set using model selection criteria. We evaluate the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes, using four candidate covariance functions (exponential, Gaussian, Matérn and rational quadratic). Summarizing the simulation results, we find that a misspecified working correlation structure can still capture some of the spatial correlation in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-squared, and overall RIC performs well.
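The comparison described in this abstract can be sketched numerically. Below is a minimal, illustrative Python example (not the authors' code): data are simulated from an exponential spatial correlation model, and BIC is computed for three candidate correlation functions by profiling a zero-mean Gaussian likelihood over a coarse grid of range parameters. All names, grids and settings here are assumptions.

```python
# Illustrative sketch of selecting a spatial correlation function by BIC.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n = 60
coords = rng.uniform(0, 10, size=(n, 2))
D = cdist(coords, coords)                       # pairwise distances

# Zero-mean Gaussian field with exponential correlation, range = 2
y = rng.multivariate_normal(np.zeros(n), np.exp(-D / 2.0))

candidates = {
    "exponential": lambda d, r: np.exp(-d / r),
    "gaussian": lambda d, r: np.exp(-((d / r) ** 2)),
    "rational_quadratic": lambda d, r: 1.0 / (1.0 + (d / r) ** 2),
}

def profile_loglik(corr_fn):
    # Profile the log-likelihood over the range parameter on a coarse grid;
    # the variance is profiled out analytically for a zero-mean model.
    best = -np.inf
    for r in np.linspace(0.5, 5.0, 10):
        R = corr_fn(D, r) + 1e-6 * np.eye(n)    # jitter for stability
        _, logdet = np.linalg.slogdet(R)
        s2 = y @ np.linalg.solve(R, y) / n
        best = max(best, -0.5 * (n * np.log(2 * np.pi * s2) + logdet + n))
    return best

bics = {name: -2 * profile_loglik(fn) + 2 * np.log(n)   # k = 2 parameters
        for name, fn in candidates.items()}
print(bics)
```

The model with the smallest BIC is preferred; with a misspecified candidate the fit can still absorb part of the spatial correlation, as the abstract notes.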
Abstract:
We consider rank-based regression models for clustered data analysis. A weighted Wilcoxon rank method is proposed to account for within-cluster correlations and varying cluster sizes. The asymptotic normality of the resulting estimators is established. A method to estimate the covariance of the estimators is also given, which bypasses estimation of the density function. Simulation studies are carried out to compare different estimators across a number of scenarios varying the correlation structure, the presence or absence of outliers, and the correlation values. The proposed methods appear to perform well; in particular, the one incorporating the correlation in the weighting achieves the highest efficiency and robustness against misspecification of the correlation structure and against outliers. A real example is provided for illustration.
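The idea of a weighted rank fit for clustered data can be sketched as follows. This is an illustrative simplification, not the authors' estimator: slope only, with each observation weighted by the inverse of its cluster size, minimising a Jaeckel-type Wilcoxon dispersion. All settings are assumptions.

```python
# Illustrative weighted Wilcoxon rank regression for clustered data.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(1)
sizes = rng.integers(3, 8, size=10)              # varying cluster sizes
clusters = np.repeat(np.arange(10), sizes)
n = len(clusters)
x = rng.normal(size=n)
# Cluster random effects induce within-cluster correlation;
# t(3) noise adds occasional outliers.
y = 1.0 + 2.0 * x + rng.normal(size=10)[clusters] + rng.standard_t(3, size=n)

w = 1.0 / np.bincount(clusters)[clusters]        # downweight large clusters

def dispersion(b):
    # Jaeckel-type dispersion with Wilcoxon scores, weighted per observation
    e = y - b * x
    phi = np.sqrt(12.0) * (rankdata(e) / (n + 1) - 0.5)
    return np.sum(w * phi * e)

slope = minimize_scalar(dispersion, bounds=(-10.0, 10.0), method="bounded").x
print(slope)
```

The rank-based dispersion depends on residuals only through their ranks, which is what gives the estimator its robustness to outliers.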
Abstract:
In closed-die forging the flash geometry should ensure that the cavity is completely filled just as the two dies come into contact at the parting plane. If metal is made to extrude through the flash gap as the dies approach the point of contact (a practice generally resorted to as a means of ensuring complete filling), the dies are unnecessarily stressed in a high-stress regime (the flash being quite thin and possibly cooled by then), which reduces die life and unnecessarily increases the energy requirement of the operation. It is therefore necessary to determine carefully the dimensions of the flash land and the flash thickness, the two parameters, apart from friction at the land, which control the lateral flow. The dimensions should be such that the flow into the longitudinal cavity is controlled throughout the operation, ensuring complete filling just as the dies touch at the parting plane. The design of the flash must be related to the shape and size of the forging cavity, since control of the flow has to be exercised throughout the operation: this is possible if the mechanics of lateral extrusion into the flash is understood for specific cavity shapes and sizes. The work reported here is part of an ongoing programme investigating flow in closed-die forging. A simple closed shape (no longitudinal flow), which may correspond to the last stages of a real forging operation, is analysed using the stress-equilibrium approach. Metal from the cavity (flange) flows into the flash by shearing in the cavity in one of the three modes considered here; for a given cavity, the mode with the least energy requirement is assumed to be the most realistic. On this basis a map has been developed which, given the depth and width of the cavity as well as the flash thickness, tells the designer the most likely mode (of the three considered) in which metal in the cavity will shear and then flow into the flash gap.
The results of a limited set of experiments, reported herein, validate this method of selecting the most likely mode of flow into the flash gap.
Abstract:
Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents in order to identify a promising one as soon as possible for given error rates. The number of patients tested with each agent was fixed at the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, that minimizes the number of patients required. The minimization is subject to constraints on the two types of error probabilities (false positive and false negative), with the Lagrange multipliers corresponding to the cost parameters for the two types of error. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
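The role of the working correlation can be illustrated with a minimal sketch (an assumed setup, not the paper's code): for a Gaussian outcome with identity link and a fixed correlation parameter, one GEE step reduces to generalised least squares computed cluster by cluster, and the working structure (here AR(1) versus independence) enters only through the within-cluster weight matrix.

```python
# One Gaussian-identity GEE/GLS step under two working correlations.
import numpy as np

rng = np.random.default_rng(2)
m, t = 40, 5                                       # 40 clusters of size 5
lags = np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
R_true = 0.6 ** lags                               # AR(1), rho = 0.6
L = np.linalg.cholesky(R_true)
x = rng.normal(size=(m, t))
y = 0.5 + 1.5 * x + rng.normal(size=(m, t)) @ L.T  # correlated errors

def gee_step(rho):
    # beta = (sum_i Xi' R^-1 Xi)^-1 sum_i Xi' R^-1 yi
    Ri = np.linalg.inv(rho ** lags)
    XtX, Xty = np.zeros((2, 2)), np.zeros(2)
    for i in range(m):
        Xi = np.column_stack([np.ones(t), x[i]])
        XtX += Xi.T @ Ri @ Xi
        Xty += Xi.T @ Ri @ y[i]
    return np.linalg.solve(XtX, Xty)

beta_ar1 = gee_step(0.6)   # working correlation matches the truth
beta_ind = gee_step(0.0)   # independence working correlation
print(beta_ar1, beta_ind)
```

Both estimators are consistent, which is the usual GEE property; the efficiency comparison studied in the abstract concerns their variances, not their consistency.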
Abstract:
With the aim of finding simple methods for the fabrication of He II refilling devices, He II flow has been studied through filters made by compacting various fine powders (oxides and metals, grain sizes in the range 0.05–2 μm) under pressure. The results obtained for the different states of He II flow, especially in the “breakthrough” and “easy flow” range, are explained by the fountain effect, He II hydrodynamics and the choking effect. According to the results, pressed-powder filters can be classified into three groups with different flow characteristics, of which the “good transfer filters”, with behaviour neatly described by simple theory, are suitable for use in He II refilling devices.
Abstract:
This paper presents two approximate analytical expressions for nonlinear electric fields in the principal direction in axially symmetric (3D) and two-dimensional (2D) ion trap mass analysers with apertures (holes in the case of 3D traps and slits in the case of 2D traps) on the electrodes. Considered together (3D and 2D), we present composite approximations for the principal unidirectional nonlinear electric fields in these ion traps. The composite electric field E has the form E = E_noaperture + E_aperture, where E_noaperture is the field within an imagined trap which is identical to the practical trap except that the apertures are missing, and E_aperture is the field contribution due to the apertures on the two trap electrodes. The field along the principal axis of the trap can in this way be well approximated for any aperture that is not too large. To derive E_aperture, classical results of electrostatics have been extended to electrodes with finite thickness and different aperture shapes. E_noaperture is a modified truncated multipole expansion for the imagined trap with no aperture. The first several terms in the multipole expansion are in principle exact (though numerically determined using the BEM), while the last term is chosen to match the field at the electrode. This expansion, once computed, works with any aperture in the practical trap. The composite field approximation for axially symmetric (3D) traps is checked for three geometries: the Paul trap, the cylindrical ion trap (CIT) and an arbitrary other trap. The approximation for 2D traps is verified using two geometries: the linear ion trap (LIT) and the rectilinear ion trap (RIT). In each case, for two aperture sizes (10% and 50% of the trap dimension), highly satisfactory fits are obtained. These composite approximations may be used in more detailed nonlinear ion dynamics studies than have hitherto been attempted. (C) 2009 Elsevier B.V. All rights reserved.
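The structure of the composite approximation can be illustrated schematically. In the sketch below the multipole coefficients and the aperture correction term are placeholders chosen for illustration only, not values or formulas from the article; only the decomposition E = E_noaperture + E_aperture and the truncated-multipole form of E_noaperture follow the text.

```python
# Schematic evaluation of a composite axial field E = E_noaperture + E_aperture.
import numpy as np

r0 = 1.0                              # trap dimension (normalised)
A = {2: 1.0, 4: 0.05, 6: -0.01}       # assumed multipole coefficients A_n

def E_noaperture(z):
    # E_z = -dPhi/dz for an axial potential Phi(z) = sum_n A_n (z / r0)**n
    return -sum(n * a * (z / r0) ** (n - 1) / r0 for n, a in A.items())

def E_aperture(z, a=0.1):
    # Placeholder correction decaying away from an aperture of size a at z = r0
    # (an assumed functional form, for illustration only).
    return 0.1 * a**3 / (a**2 + (r0 - z) ** 2) ** 1.5

z = np.linspace(0.0, 0.9, 10)
E = E_noaperture(z) + E_aperture(z)
```

The practical appeal of the decomposition is that E_noaperture is computed once per trap geometry, while E_aperture can be swapped for different aperture sizes and shapes.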
Abstract:
In this paper, we probe the origin of second harmonic generation (SHG) in copper nanoparticles by polarization-resolved hyper-Rayleigh scattering (HRS). Results obtained with various sizes of copper nanoparticles at four different wavelengths covering the range 738-1907 nm reveal that the origin of SHG in these particles is purely dipolar in nature as long as the size (d) of the particles remains small compared to the wavelength (λ) of light (the "small-particle limit"). However, the contribution of higher-order multipoles coupled with the retardation effect becomes apparent with an increase in the d/λ ratio. We identify the "small-particle limit" in second harmonic generation from noble metal nanoparticles by evaluating the critical d/λ ratio at which the retardation effect sets in. We find that the second-order nonlinear optical property of copper nanoparticles closely resembles that of gold, but not that of silver. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The common focus of the studies brought together in this work is the prosodic segmentation of spontaneous speech. The theoretically most central aspect is the introduction and further development of the IJ-model of intonational chunking. The study consists of a general introduction and five detailed studies that approach prosodic chunking from different perspectives. The data consist of recordings of face-to-face interaction in several spoken varieties of Finnish and Finland Swedish; the methodology is usage-based and qualitative. The term “speech prosody” refers primarily to the melodic and rhythmic characteristics of speech. Both speaking and understanding speech require the ability to segment the flow of speech into suitably sized prosodic chunks. In order to be usage-based, a study of spontaneous speech consequently needs to be based on material that is segmented into prosodic chunks of various sizes. The segmentation is seen to form a hierarchy of chunking. The prosodic models that have so far been developed and employed in Finland have been based on sentences read aloud, which has made it difficult to apply these models in the analysis of spontaneous speech. The prosodic segmentation of spontaneous speech has not previously been studied in detail in Finland. This research focuses mainly on the following three questions: (1) What are the factors that need to be considered when developing a model of prosodic segmentation of speech, so that the model can be employed regardless of the language or dialect under analysis? (2) What are the characteristics of a prosodic chunk, and what are the similarities in the ways chunks of different languages and varieties manifest themselves that will make it possible to analyze different data according to the same criteria? (3) How does the IJ-model of intonational chunking introduced as a solution to question (1) function in practice in the study of different varieties of Finnish and Finland Swedish? 
The boundaries of the prosodic chunks were manually marked in the material according to context-specific acoustic and auditory criteria. On the basis of the data analyzed, the IJ-model was further elaborated and implemented, thus allowing comparisons between different language varieties. On the basis of the empirical comparisons, a prosodic typology is presented for the dialects of Swedish in Finland. The general contention is that the principles of the IJ-model can readily be used as a methodological tool for prosodic analysis irrespective of language varieties.
Abstract:
The article describes a generalized estimating equations approach that was used to investigate the impact of technology on vessel performance in a trawl fishery during 1988-96, while accounting for spatial and temporal correlations in the catch-effort data. Robust estimation of parameters in the presence of several levels of clustering depended more on the choice of cluster definition than on the choice of correlation structure within the cluster. Models with smaller cluster sizes produced stable results, while models with larger cluster sizes, that may have had complex within-cluster correlation structures and that had within-cluster covariates, produced estimates sensitive to the correlation structure. The preferred model arising from this dataset assumed that catches from a vessel were correlated in the same years and the same areas, but independent in different years and areas. The model that assumed catches from a vessel were correlated in all years and areas, equivalent to a random effects term for vessel, produced spurious results. This was an unexpected finding that highlighted the need to adopt a systematic strategy for modelling. The article proposes a modelling strategy of selecting the best cluster definition first, and the working correlation structure (within clusters) second. The article discusses the selection and interpretation of the model in the light of background knowledge of the data and utility of the model, and the potential for this modelling approach to apply in similar statistical situations.
Abstract:
The size at recruitment, temporal and spatial distribution, and abiotic factors influencing the abundance of three commercially important species of penaeid prawns in the sublittoral trawl grounds of Moreton Bay (Queensland, Australia) were compared. Metapenaeus bennettae and Penaeus plebejus recruit to the trawl grounds at relatively small sizes (14-15 mm carapace length, CL), below those at which prawns are selected for, and retained in, the fleet's cod-ends. In contrast, Penaeus esculentus recruits at the relatively large size of 27 mm CL from February to May, well above the size range selected for. Recruitment of M. bennettae extends over several months, September-October and February-March, and is thus likely to be bi-annual, while the recruitment period of P. plebejus was distinct, peaking in October-November each year. Size classes of M. bennettae were the most spatially stratified of the three species. Catch rates of recruits were negatively correlated with depth for all three species, and were also negatively correlated with salinity for M. bennettae.
Abstract:
We explore the use of Gittins indices to search for near optimality in sequential clinical trials. Some adaptive allocation rules are proposed to achieve the following two objectives as far as possible: (i) to reduce the expected successes lost, (ii) to minimize the error probability at the end. Simulation results indicate the merits of the rules based on Gittins indices for small trial sizes. The rules are generalized to the case when neither of the response densities is known. Asymptotic optimality is derived for the constrained rules. A simple allocation rule is recommended for one-stage models. The simulation results indicate that it works better than both equal allocation and Bather's randomized allocation. We conclude with a discussion of possible further developments.
Abstract:
Consumer risk assessment is a crucial step in the regulatory approval of pesticide use on food crops. Recently, an additional hurdle has been added to the formal consumer risk assessment process with the introduction of short-term intake or exposure assessment and a comparable short-term toxicity reference, the acute reference dose. Exposure to residues during one meal or over one day is what matters for short-term or acute intake. Short-term exposure can be substantially higher than average, because the consumption of a food on a single occasion can be very large compared with typical long-term or mean consumption, and the food may carry a much larger residue than average. Furthermore, the residue level in a single unit of a fruit or vegetable may be higher by a factor (defined as the variability factor, which we have shown to be typically ×3 for the 97.5th percentile unit) than the average residue in the lot. Available marketplace data and supervised residue trial data are examined in an investigation of the variability of residues in units of fruit and vegetables. A method is described for estimating the 97.5th percentile value from sets of unit residue data. Variability appears to be generally independent of the pesticide, the crop, the crop unit size and the residue level. The deposition of pesticide on the individual unit during application is probably the most significant factor. The diets used in the calculations should ideally come from individual and household surveys with enough consumers of each specific food to determine large portion sizes. The diets should distinguish the different forms of a food consumed, e.g. canned, frozen or fresh, because the residue levels associated with the different forms may be quite different. Dietary intakes may be calculated by a deterministic method or a probabilistic method.
In the deterministic method the intake is estimated under the assumption of large-portion consumption of a 'high residue' food (high residue in the sense that the pesticide was used at the highest recommended label rate, the crop was harvested at the smallest interval after treatment, and the residue in the edible portion was the highest found in any of the supervised trials conducted under these use conditions). The deterministic calculation also includes a variability factor for those foods consumed as units (e.g. apples, carrots) to allow for the elevated residue in some single units, which may not be seen in composited samples. In the probabilistic method the distribution of dietary consumption and the distribution of possible residues are combined in repeated probabilistic calculations to yield a distribution of possible residue intakes. Additional information, such as the percentage of the commodity treated and the combination of residues from multiple commodities, may be incorporated into probabilistic calculations. The IUPAC Advisory Committee on Crop Protection Chemistry has made 11 recommendations relating to acute dietary exposure.
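The deterministic calculation for unit foods can be sketched as a short worked example, in the spirit of the IESTI-type formula implied above: one unit of the portion carries the variability factor times the highest residue, and the remainder of the portion carries the highest residue itself. The numbers below are purely illustrative and not taken from the article.

```python
# Minimal sketch of a deterministic short-term intake calculation for a unit food.

def short_term_intake(lp_g, unit_g, hr_mg_per_kg, v, bw_kg):
    """Intake (mg/kg bw/day) when the large portion exceeds one unit:
    the first unit carries the variability factor v times the highest
    residue HR; the remainder of the portion carries HR itself."""
    lp, u = lp_g / 1000.0, unit_g / 1000.0         # grams -> kg food
    return (u * hr_mg_per_kg * v + (lp - u) * hr_mg_per_kg) / bw_kg

# Hypothetical example: 500 g large portion, 200 g unit weight, highest
# supervised-trial residue 1 mg/kg, variability factor 3, 60 kg consumer.
intake = short_term_intake(500, 200, 1.0, 3, 60)   # -> 0.015 mg/kg bw
exceeds_arfd = intake > 0.02                       # compare with an assumed ARfD
```

The final comparison against the acute reference dose (ARfD) is the decision step; the probabilistic method replaces the fixed inputs with distributions.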