630 results for Wilhite, Clayton
Abstract:
Background: Adolescent idiopathic scoliosis is the most common type of spinal deformity, and whilst the risk of progression appears to be biomechanically mediated (larger deformities are more likely to progress), the detailed biomechanical mechanisms driving progression are not well understood. Gravitational forces in the upright position are the primary sustained loads experienced by the spine. In scoliosis they are asymmetrical, generating moments about the spinal joints which may promote asymmetrical growth and deformity progression. Using 3D imaging modalities to estimate segmental torso masses allows the gravitational loading on the scoliotic spine to be determined. The resulting distribution of joint moments aids understanding of the mechanics of scoliosis progression.
Methods: Existing low-dose CT scans were used to estimate torso segment masses and joint moments for 20 female scoliosis patients. Intervertebral joint moments at each vertebral level were found by summing the moments of each of the torso segment masses above the required joint.
Results: The patients' mean age was 15.3 years (SD 2.3; range 11.9–22.3 years), mean thoracic major Cobb angle 52° (SD 5.9°; range 42°–63°) and mean weight 57.5 kg (SD 11.5 kg; range 41–84.7 kg). Joint moments of up to 7 Nm were estimated at the apical level. No significant correlation was found between the patients' major Cobb angles and apical joint moments.
Conclusions: Patients with larger Cobb angles do not necessarily have higher joint moments, and curve shape is an important determinant of joint moment distribution. These findings may help to explain the variations in progression between individual patients. This study suggests that substantial corrective forces are required of either internal instrumentation or orthoses to effectively counter the gravity-induced moments acting to deform the spinal joints of idiopathic scoliosis patients.
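The summation of segment moments described in the Methods can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segment masses and lateral offsets below are fabricated for the example.

```python
G = 9.81  # gravitational acceleration, m/s^2

def joint_moment(segments):
    """Gravitational moment about an intervertebral joint, in N*m.

    `segments` is a list of (mass_kg, lateral_offset_m) pairs, one per torso
    segment superior to the joint; each segment contributes m * g * d, where
    d is the lateral offset of its centre of mass from the joint axis.
    """
    return sum(m * G * d for m, d in segments)

# Illustrative, fabricated numbers: four torso segments above an apical joint.
segments_above_apex = [(2.1, 0.010), (2.4, 0.025), (2.6, 0.032), (2.8, 0.021)]
apical_moment = joint_moment(segments_above_apex)  # a few N*m
```

In practice the masses and offsets would come from the CT-derived segment models, and the sum would be repeated at every vertebral level.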
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determination of liquid limit water contents. It has been shown from the basic characteristics of soils and associated physico-chemical factors that critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This provides a scientific basis for liquid limit determination by one-point methods, which hitherto had been formulated purely on statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination have been critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit has been suggested and compared with other methods. Experimental data of Sherwood & Ryley (1970) have been employed for comparison of the different cone penetration methods. Results indicate that, apart from mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by liquid limit values, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macrolevel.
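As an illustration of the one-point idea, the widely used percussion-cup relation LL = w_N (N/25)^0.121 can be sketched; note this is the conventional cup formula, not the cone penetrometer method the paper proposes.

```python
def one_point_liquid_limit(w_n, n_blows):
    """One-point percussion-cup estimate: LL = w_N * (N / 25)^0.121,
    where w_N is the water content (%) at N blows (valid roughly for N = 20-30).
    """
    return w_n * (n_blows / 25.0) ** 0.121

# A soil closing the groove at 22 blows with 48% water content:
ll = one_point_liquid_limit(48.0, 22)  # slightly below 48, since N < 25
```

The exponent 0.121 is the standard empirically fitted value; the paper's contribution is to explain why such single-point relations work regardless of soil type.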
Abstract:
Natural convection in rectangular two-dimensional cavities with differentially heated side walls is a standard problem in numerical heat transfer. Most of the existing studies have considered the low-Ra laminar regime. The general thrust of the present research is to investigate higher-Ra flows extending into the unsteady and turbulent regimes, where the physics is not fully understood and appropriate models for turbulence are not yet established. In the present study the Boussinesq approximation is used, but the theoretical background and some preliminary results have been obtained [1] for flows with variable properties.
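The regime distinction above is governed by the Rayleigh number, Ra = g·β·ΔT·L³/(ν·α). A minimal sketch, with illustrative air-like property values that are assumptions, not taken from the abstract:

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for a buoyancy-driven cavity flow.

    g: gravity (m/s^2), beta: thermal expansion coefficient (1/K),
    delta_t: wall temperature difference (K), length: cavity width (m),
    nu: kinematic viscosity (m^2/s), alpha: thermal diffusivity (m^2/s).
    """
    return g * beta * delta_t * length ** 3 / (nu * alpha)

# Air at ~300 K across a 0.1 m cavity with a 10 K side-wall difference:
ra = rayleigh_number(9.81, 1.0 / 300.0, 10.0, 0.1, 1.6e-5, 2.2e-5)  # ~1e6
```

Values around 10^6 sit in the laminar regime the abstract refers to; the unsteady and turbulent regimes under study lie at substantially higher Ra.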
Abstract:
The interdependence of the Greek and other European stock markets, and the resulting portfolio implications, are examined in the wavelet and variational mode decomposition domains. In applying the decomposition techniques, we analyze the structural properties of the data and distinguish between short- and long-term dynamics of stock market returns. First, GARCH-type models are fitted to obtain standardized residuals. Next, different copula functions are evaluated and, based on the conventional information criteria and the time-varying parameter, the Joe-Clayton copula is chosen to model the tail dependence between the stock markets. The short-run lower tail dependence time paths show a sudden increase in comovement during the global financial crisis. The long-run dependence results suggest that European stock markets have higher interdependence with the Greek stock market. Individual countries' Value at Risk (VaR) separates the countries into two distinct groups. Finally, the two-asset portfolio VaR measures identify potential markets for diversifying investments in the Greek stock market.
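The lower tail dependence the abstract models has a closed form for the Clayton family (the lower-tail component of the Joe-Clayton copula). A minimal sketch with a fixed dependence parameter theta, whereas the abstract's specification is time-varying:

```python
def clayton_cdf(u, v, theta):
    """Bivariate Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta),
    for theta > 0; stronger theta means stronger lower-tail clustering."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def lower_tail_dependence(theta):
    """Clayton lower tail dependence coefficient, lambda_L = 2^(-1/theta)."""
    return 2.0 ** (-1.0 / theta)

# Illustrative value: theta = 2 implies lambda_L = 2^(-1/2) ~ 0.71,
# i.e. a high probability of joint extreme losses.
lam = lower_tail_dependence(2.0)
```

A sudden rise in the fitted theta over time, as during the crisis period described above, translates directly into a rise in lambda_L.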
Abstract:
Most human ACTA1 skeletal actin gene mutations cause dominant, congenital myopathies, often with severely reduced muscle function and neonatal mortality. High sequence conservation of actin means many mutated ACTA1 residues are identical to those in Drosophila Act88F, an indirect flight muscle-specific sarcomeric actin. Four known Act88F mutations occur at the same actin residues mutated in ten ACTA1 nemaline mutations: A138D/P, R256H/L, G268C/D/R/S and R372C/S. These Act88F mutants were examined for similar muscle phenotypes. Mutant homozygotes show phenotypes ranging from a lack of myofibrils to almost normal sarcomeres at eclosion. Aberrant Z-disc-like structures and serial Z-disc arrays, 'zebra bodies', are observed in homozygotes and heterozygotes of all four Act88F mutants. These electron-dense structures show homologies to human nemaline bodies/rods, but are much smaller than those typically found in the human myopathy. We conclude that the Drosophila indirect flight muscles provide a good model system for studying ACTA1 mutations.
Abstract:
This thesis examines emergence as a natural-scientific, metaphysical and theological concept, with particular attention to Philip Clayton's theory of emergence. The object of study is Philip Clayton's book Mind and Emergence: From Quantum to Consciousness, and the research method is conceptual analysis. When emergence is examined as a metaphysical theory, it can be seen as a philosophical view of reality that is an alternative to physicalism and dualism. Physicalism is the view of observable reality according to which all phenomena are ultimately reducible to physics and explicable in physical terms. Dualism, in turn, is the view that the human mind and consciousness in particular are something entirely different from, and independent of, the physical world. In emergence theory the world is seen as a developing whole in which new properties continually arise as complexity grows. These properties are not reducible to the structures from which they arose, and their appearance cannot be predicted by scientific or any other means. In the weak form of emergence this unpredictability is merely epistemic, a limitation on what can be known, and causal effects occur only at the level of physical objects. A metaphysical theory, however, requires strong emergence, in which the emergent properties have causal effects independent of their physical basis. This is called downward causation. The resulting conception of reality is monistic and holistic: everything that exists has developed from the same material but is not reducible to it, the world contains ontologically distinct levels of reality, and reality is more than the sum of its parts. Theological applications of emergence have also been proposed.
Since physicalism leaves no room for religious faith, and dualism is problematic from the standpoint of natural science, emergence has been sought as a philosophical framework in which natural science and religion could be reconciled. The first chapter deals with physicalism and dualism, and in particular with the philosophical and scientific criticism that can be levelled against them. The second chapter examines various ways of defining the concept of emergence and presents in particular Philip Clayton's views on weak and strong emergence, on emergence in the natural sciences, and on the human mind as a phenomenon emergent with respect to the physical world. Emergence is mentioned in connection with many phenomena that have been studied by means of chaos theory and complexity research; the third chapter seeks to situate emergence in relation to these theories and to modern physics in general. On the basis of emergence theory, Philip Clayton has also constructed a theological theory in which the conception of God as creator, and God's action in the world, are reconciled with an emergent conception of reality. This theological theory is presented and evaluated in the fourth chapter.
Abstract:
The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared with the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
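For contrast with the D-kernel, the conventional baseline it is compared against can be sketched: a fixed-bandwidth Gaussian kernel density estimate with the normal reference rule bandwidth (the very rule the abstract says the D-kernel avoids). The sample values below are illustrative, not from the paper.

```python
import math

def silverman_bandwidth(data):
    """Normal reference rule: h = 1.06 * sigma * n^(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.06 * sigma * n ** -0.2

def gaussian_kde(data, x, h):
    """Fixed-bandwidth Gaussian kernel density estimate at point x."""
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / norm

sample = [0.8, 1.1, 1.9, 2.4, 3.0, 3.3, 4.7, 5.2]
h = silverman_bandwidth(sample)
density_mid = gaussian_kde(sample, 2.5, h)
```

Because the Gaussian kernel assigns mass on both sides of every data point, it leaks density past a physical lower bound such as zero flow; this is the boundary leakage problem the D-kernel is designed to avoid.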
Abstract:
Overland rain retrieval using spaceborne microwave radiometers presents a myriad of complications, as land is a radiometrically warm and highly variable background. Hence, land rainfall algorithms of the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) have traditionally incorporated empirical relations of microwave brightness temperature (Tb) with rain rate, rather than relying on physically based radiative transfer modeling of rainfall (as implemented in the TMI ocean algorithm). In this paper, a sensitivity analysis is conducted, using the Spearman rank correlation coefficient as a benchmark, to estimate the best combination of TMI low-frequency channels that are highly sensitive to the near-surface rainfall rate from the TRMM Precipitation Radar (PR). Results indicate that the TMI channel combinations not only contain information about rainfall wherein liquid water drops are the dominant hydrometeors but also aid in surface noise reduction over a predominantly vegetative land surface background. Furthermore, the variations of the rainfall signature in these channel combinations are not properly understood owing to their inherent uncertainties and highly nonlinear relationship with rainfall. Copula theory is a powerful tool for characterizing the dependence between complex hydrological variables, as well as for uncertainty modeling via ensemble generation. Hence, this paper proposes a regional model using Archimedean copulas to study the dependence of TMI channel combinations on precipitation over the land regions of the Mahanadi basin, India, using version 7 orbital data from the passive and active sensors on board TRMM, namely TMI and PR. Studies conducted for different rainfall regimes over the study area show the suitability of the Clayton and Gumbel copulas for modeling convective and stratiform rainfall types for the majority of the intraseasonal months.
Furthermore, large ensembles of TMI Tb (from the most sensitive TMI channel combination) were generated conditional on various quantiles (25th, 50th, 75th, and 95th) of the convective and the stratiform rainfall. Comparatively greater ambiguity was observed in modeling extreme values of the convective rain type. Finally, the efficiency of the proposed model was tested by comparing the results with traditionally employed linear and quadratic models. Results reveal the superior performance of the proposed copula-based technique.
Abstract:
A permanent field trial was established in the 1987 postrera (second planting) season to study the long-term effects of crop rotation and weed control on weed dynamics and the weed seed bank in the soil. A randomized complete block design with a split-plot arrangement was used, with factor A being rotation (sorghum-sorghum; maize-sorghum; maize-soybean; cucumber-soybean; cucumber-sorghum) and factor B being weed control (chemical control; control during the critical period; and periodic hand weeding). After three years, in the 1990 postrera season, the level and composition of the weed seed bank were determined by rotation, weed control treatment and weed species using the cultivation method, and then compared with the actual weed abundance. The results show drastic changes in weed infestation after only three years of the trial. Actual infestation ranged from 88 to 440 individuals/m2, while the potential infestation was determined at 3,125 to 12,969 seeds/m2, giving seed emergence rates of 1.99% to 10.42%. The highest infestation (actual and potential) was shown by the cucumber-soybean rotation, with 330 ind/m2 and 6,771 seeds/m2 respectively, and by chemical control, with 227 ind/m2 and 6,062 seeds/m2, owing to the predominance of the species Rottboellia cochinchinensis (Lour.) Clayton, which ranged from 25 to 405 ind/m2. The lowest actual infestation was recorded in the maize-sorghum rotation with 101 ind/m2, and the lowest potential infestation in the maize-soybean rotation with 4,010 seeds/m2; periodic hand weeding showed the lowest infestation (actual and potential) with 109 ind/m2 and 3,531 seeds/m2 respectively.
Abstract:
This work is the result of a diagnostic survey carried out in the municipality of Nueva Guinea, Zelaya Central, during the apante season (1994-1995). The study addressed the biological factors affecting common bean (Phaseolus vulgaris L.) production, mainly pests, diseases and weeds, and covered nine settlements and 14 farmers in the municipality of Nueva Guinea. The results show that the main pest problem is the slug Vaginulus plebeius Fisher, and pest control measures are focused on managing that species. As for weeds, narrow-leaved species predominated, such as itchgrass Rottboellia cochinchinensis (Lour.) Clayton, guinea grass Panicum maximum Jacq., Ischaemum ciliare Salisb. and bermudagrass Cynodon dactylon (L.) Pers. Broad-leaved weeds occurred in a smaller proportion, most notably the morning glory Ipomoea tiliacea (Willd.) Choisy. Weed diversity was greater under conventional tillage, and the smallest number of weed species was found under minimum tillage. The most important diseases were anthracnose Colletotrichum lindemuthianum (Sacc. & Magnus), BCMV (bean common mosaic virus) and common bacterial blight of bean Xanthomonas campestris pv. phaseoli (Smith) Dye. The scarce rainfall during the study period probably explains why fungal diseases did not appear in the sampled plots. The economic analysis shows that common bean production in Nueva Guinea was not profitable in the cycle studied.
Abstract:
CONTENTS: Efforts of a farmer in fish seed production for self-employment, by Ras Behari Baraik and Ashish Kumar. Remembering: the missing capacity, by Terrence Clayton. Measuring the process, by Nick Innes-Taylor. Women’s fish farmers group in Nawalparasi, Nepal, by S.K. Pradhan. Periphyton-based aquaculture: a sustainable technology for resource-poor farmers, by M.E. Azim, M.A. Wahab, M.C.J. Verdegem, A.A. van Dam and M.C.M. Beveridge. Unlocking information on the Internet: STREAM media monitoring and issue tracking, by Paul Bulcock (PDF has 16 pages.)
Distribution and Density of Vegetative Hydrilla Propagules in the Sediments of Two New Zealand Lakes
Abstract:
The distribution and density of hydrilla (Hydrilla verticillata (L.f.) Royle) turions and tubers in two New Zealand lakes were assessed by sampling sediment cores from Lakes Tutira and Waikapiro each year from 1994 to 1997. Turion and tuber density differed with water depth, with maximum numbers of tubers and turions found in the 1-2 m and 1.5-4 m water depth ranges, respectively. A high turion-to-tuber ratio was observed, with turions accounting for over 80% of propagules. The relatively low numbers of turions and tubers compared with other reports, and the concentration of most tubers in shallow water, are likely associated with grazing by black swans (Cygnus atratus Latham), which maintains a canopy of hydrilla consistently 1 m below the water surface.
Abstract:
During the late 1980s to early 1990s, a range of aquatic habitats in the central North Island of New Zealand was invaded by the filamentous green alga water net (Hydrodictyon reticulatum (Linn.) Lagerheim). The alga caused significant economic and recreational impacts at major sites of infestation, but it was also associated with enhanced invertebrate numbers and was the likely cause of an improvement in the trout fishery. The causes of the prolific growth of water net and the range of control options pursued are reviewed. The possible causes of its sudden decline in 1995 are considered, including physical factors, an increase in grazer pressure, disease, and loss of genetic vigour.
Abstract:
The governing council of NACA has resolved to effect a shift in emphasis from aquaculture development to aquaculture for development. This will require engaging partners from a broad spectrum of government and development agencies, and will shape the nature of the information that needs to be gathered and the strategies used for disseminating information and initiating action. The vehicle for operationalising this shift is STREAM - Support to Regional Aquatic Resources Management. This report outlines the nature of the STREAM network, its relationship to NACA's vision, mission, objectives and operating principles, and how STREAM differs from previous NACA networks. Because STREAM is different, a theoretical basis for network communication is presented, along with an outline of the preliminary steps in getting the network up and running. (PDF contains 33 pages.)
Abstract:
Abstract to Part I
The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting the S-to-P amplitude ratios. Effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for the spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.
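The quantity that attenuation tomography redistributes along ray paths is the path-integrated attenuation t* = Σ dt_i / Q_i, which controls the spectral amplitude decay exp(-π f t*). A minimal sketch with illustrative segment values (not from the study):

```python
import math

def t_star(path_segments):
    """Path-integrated attenuation t* = sum(dt_i / Q_i) over ray segments,
    where dt_i is the travel time spent in a segment with quality factor Q_i."""
    return sum(dt / q for dt, q in path_segments)

def spectral_decay(f_hz, tstar):
    """Amplitude reduction factor exp(-pi * f * t*) at frequency f (Hz)."""
    return math.exp(-math.pi * f_hz * tstar)

# Illustrative ray: 2 s in ordinary crust (Q = 200), then 1 s inside a
# highly attenuating body with Q = 30, comparable to the Coso anomaly.
ts = t_star([(2.0, 200.0), (1.0, 30.0)])
decay = spectral_decay(1.0, ts)  # fraction of amplitude surviving at 1 Hz
```

Back-projection inverts this relation: observed amplitude (or amplitude-ratio) residuals are distributed back along each ray to update the 1/Q values of the cells it crosses.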
No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.
Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.
Abstract to Part II
Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile toward the Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena that are due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at the Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Those events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.