46 results for L33 - Comparison of Public and Private Enterprises
in CentAUR: Central Archive University of Reading - UK
Abstract:
Autism spectrum conditions (ASC) affect more males than females in the general population. However, within ASC it is unclear if there are phenotypic sex differences. Testing for similarities and differences between the sexes is not only important for clinical assessment but also has implications for theories of typical sex differences and of autism. Using cognitive and behavioral measures, we investigated similarities and differences between the sexes in age- and IQ-matched adults with ASC (high-functioning autism or Asperger syndrome). Of the 83 (45 males and 38 females) participants, 62 (33 males and 29 females) met Autism Diagnostic Interview-Revised (ADI-R) cut-off criteria for autism in childhood and were included in all subsequent analyses. The severity of childhood core autism symptoms did not differ between the sexes. Males and females also did not differ in self-reported empathy, systemizing, anxiety, depression, and obsessive-compulsive traits/symptoms or mentalizing performance. However, adult females with ASC showed more lifetime sensory symptoms (p = 0.036), fewer current socio-communication difficulties (p = 0.001), and more self-reported autistic traits (p = 0.012) than males. In addition, females with ASC who also had developmental language delay had lower current performance IQ than those without developmental language delay (p < 0.001), a pattern not seen in males. The absence of typical sex differences in empathizing-systemizing profiles within the autism spectrum confirms a prediction from the extreme male brain theory. Behavioral sex differences within ASC may also reflect different developmental mechanisms between males and females with ASC. We discuss the importance of the superficially better socio-communication ability in adult females with ASC in terms of why females with ASC may more often go under-recognized, and receive their diagnosis later, than males.
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation and hence the diabatic heating field over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind shear based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
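As a rough illustration of the PDF-based diagnostic described above, the sketch below pools the daily values of the leading intraseasonal principal component for weak and strong monsoon years and compares the skewness of the two distributions. All variable names and numbers are placeholders, not the reanalysis data used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: pc1 holds the daily principal-component time series of the
# dominant intraseasonal mode for each monsoon season; weak_years is a placeholder
# classification of seasons by the dynamical monsoon index (DMI).
rng = np.random.default_rng(0)
pc1 = {year: rng.standard_normal(120) for year in range(1979, 1994)}
weak_years = {1979, 1982, 1987}

def pooled_pdf_skewness(pc_series, years):
    """Pool the daily PC values for the selected years and summarise their PDF."""
    pooled = np.concatenate([pc_series[y] for y in years])
    density, edges = np.histogram(pooled, bins=25, density=True)
    return stats.skew(pooled), density, edges

weak_skew, _, _ = pooled_pdf_skewness(pc1, weak_years)
strong_skew, _, _ = pooled_pdf_skewness(pc1, set(pc1) - weak_years)
print(f"PDF skewness: weak years = {weak_skew:.2f}, strong years = {strong_skew:.2f}")
```

A negative skewness for the weak-year pool would correspond to the reported tilt of the PDF toward break conditions.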
Abstract:
Phytoextraction has been proposed as an alternative remediation technology for soils polluted with heavy metals or radionuclides, but is generally considered too slow. Enhancing the accumulation of trace pollutants in harvestable plant tissues is a prerequisite for the technology to be practically applicable. The chelating aminopolycarboxylic acid, ethylene diamine tetraacetate (EDTA), has been found to enhance shoot accumulation of heavy metals. However, the use of EDTA in phytoextraction may not be suitable due to its high environmental persistence, which may lead to groundwater contamination. This paper aims to assess whether ethylene diamine disuccinate (EDDS), a biodegradable chelator, can be used for enhanced phytoextraction purposes. A laboratory experiment was conducted to examine mobilisation of Cd, Cu, Cr, Ni, Pb and Zn into the soil solution upon application of EDTA or EDDS. The longevity of the induced mobilisation was monitored for a period of 40 days after application. Estimated effect half-lives ranged between 3.8 and 7.5 days for EDDS, depending on the applied dose. The minimum observed effect half-life of EDTA was 36 days, while for the highest applied dose no decrease was observed throughout the 40-day period of the mobilisation experiment. Performance of EDTA and EDDS for phytoextraction was evaluated by application to Helianthus annuus. Two other potential chelators, known for their biodegradability in comparison to EDTA, were tested in the plant experiment: nitrilo acetic acid (NTA) and citric acid. Uptake of heavy metals was higher in EDDS-treated pots than in EDTA-treated pots. The effects were, however, still considered insufficient for efficient remediation. This may be partly due to the choice of timing for application of the soil amendment. Fixing the time of application at an earlier point before harvest may yield better results. NTA and citric acid induced no significant effects on heavy metal uptake. (C) 2004 Elsevier Ltd. All rights reserved.
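Effect half-lives of the kind quoted above can be obtained by fitting first-order decay to the monitored soil-solution concentrations. A minimal sketch follows, with invented numbers chosen only to fall in the reported EDDS range; it is not the fitting procedure used in the paper.

```python
import numpy as np

# Hypothetical monitoring data: days after chelator application and the
# EDDS-mobilised metal concentration in soil solution (arbitrary units).
days = np.array([0, 5, 10, 20, 30, 40], dtype=float)
conc = np.array([100.0, 58.0, 35.0, 13.0, 4.5, 1.7])

# Assume first-order decay of the mobilisation effect: C(t) = C0 * exp(-k t).
# A straight-line fit of ln(C) against t gives the rate constant k.
slope, intercept = np.polyfit(days, np.log(conc), 1)
k = -slope                      # first-order rate constant (per day)
half_life = np.log(2) / k       # effect half-life in days
print(f"effect half-life = {half_life:.1f} days")
```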
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can directly be measured. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
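A hedged sketch of the first (thermohaline threshold) method is given below: a grid of temperature and salinity thresholds is scanned, the pair maximising the correlation with a float-derived leakage series is kept, and a linear regression converts the thresholded flux into a leakage estimate. Every array here is synthetic; the field names, threshold ranges and units are assumptions, not values from the model runs described above.

```python
import numpy as np

# Hypothetical model output on the GoodHope section: per-cell transports v (Sv),
# temperature T (deg C) and salinity S, sampled monthly, plus the float-derived
# Agulhas-leakage time series used as the calibration target.
rng = np.random.default_rng(1)
n_t, n_cells = 240, 500
v = rng.normal(0.02, 0.05, (n_t, n_cells))
T = rng.normal(12.0, 4.0, (n_t, n_cells))
S = rng.normal(35.0, 0.4, (n_t, n_cells))
leak_float = rng.normal(15.0, 5.0, n_t)

def thresholded_flux(t_min, s_min):
    """Integrate only the cells warmer and more saline than the thresholds."""
    mask = (T > t_min) & (S > s_min)
    return (v * mask).sum(axis=1)

# Scan candidate thresholds, keep the pair that maximises the correlation with the
# float-derived leakage, then regress the leakage on the thresholded flux.
best = max((np.corrcoef(thresholded_flux(t0, s0), leak_float)[0, 1], t0, s0)
           for t0 in np.arange(8.0, 18.0, 0.5) for s0 in np.arange(34.5, 35.6, 0.1))
r, t_opt, s_opt = best
flux = thresholded_flux(t_opt, s_opt)
slope, intercept = np.polyfit(flux, leak_float, 1)
print(f"r = {r:.2f}, T* = {t_opt:.1f}, S* = {s_opt:.1f}, "
      f"leakage ~ {slope:.2f} * flux + {intercept:.2f} Sv")
```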
Abstract:
Transient and continuous recombinant protein expression by HEK cells was evaluated in a perfused monolithic bioreactor. Highly porous synthetic cryogel scaffolds (10 ml bed volume) were characterised by scanning electron microscopy and tested as cell substrates. Efficient seeding was achieved (94% of the inoculum retained, with 91-95% viability). Metabolite monitoring indicated continuous cell growth, and endpoint cell density was estimated by genomic DNA quantification to be 5.2 × 10⁸, 1.1 × 10⁹ and 3.5 × 10¹⁰ cells at days 10, 14 and 18. Culture of stably transfected cells allowed continuous production of the Drosophila cytokine Spätzle by the bioreactor at the same rate as in monolayer culture (total 1.2 mg at day 18), and this protein was active. In transient transfection experiments more protein was produced per cell compared with monolayer culture. Confocal microscopy confirmed homogeneous GFP expression after transient transfection within the bioreactor. Monolithic bioreactors are thus shown to be a flexible and powerful tool for manufacturing recombinant proteins.
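For illustration only, a cell count can be back-calculated from a total genomic-DNA measurement by assuming a fixed DNA content per cell; the ~6.6 pg/cell figure below is a generic diploid human approximation and not necessarily the conversion factor used in the study.

```python
# A minimal sketch of estimating cell number from quantified genomic DNA,
# assuming roughly 6.6 pg of DNA per cell (an approximation for diploid human cells).
PG_DNA_PER_CELL = 6.6  # pg per cell, assumed

def cells_from_gdna(total_dna_ug: float) -> float:
    """Convert a total genomic-DNA measurement (micrograms) to a cell count."""
    return total_dna_ug * 1e6 / PG_DNA_PER_CELL  # 1 ug = 1e6 pg

# Example: about 3.4 mg of DNA would correspond to roughly 5e8 cells
# (the order of magnitude of the day-10 estimate above).
print(f"{cells_from_gdna(3.4e3):.2e} cells")
```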
Abstract:
G3B3 and G2MP2 calculations using Gaussian 03 have been carried out to investigate the protonation preferences for phenylboronic acid. All nine heavy atoms have been protonated in turn. With both methodologies, the two lowest protonation energies are obtained with the proton located either at the ipso carbon atom or at a hydroxyl oxygen atom. Within the G3B3 formalism, the lowest-energy configuration by 4.3 kcal·mol⁻¹ is found when the proton is located at the ipso carbon, rather than at the electronegative oxygen atom. In the resulting structure, the phenyl ring has lost a significant amount of aromaticity. By contrast, calculations with G2MP2 show that protonation at the hydroxyl oxygen atom is favored by 7.7 kcal·mol⁻¹. Calculations using the polarizable continuum model (PCM) solvent method also give preference to protonation at the oxygen atom when water is used as the solvent. The preference for protonation at the ipso carbon found by the more accurate G3B3 method is unexpected and its implications in Suzuki coupling are discussed. (C) 2006 Wiley Periodicals, Inc.
Abstract:
This paper compares and contrasts, for the first time, one- and two-component gelation systems that are direct structural analogues and draws conclusions about the molecular recognition pathways that underpin fibrillar self-assembly. The new one-component systems comprise L-lysine-based dendritic headgroups covalently connected to an aliphatic diamine spacer chain via an amide bond. One-component gelators with different generations of headgroup (from first to third generation) and different length spacer chains are reported. The self-assembly of these dendrimers in toluene was elucidated using thermal measurements, circular dichroism (CD) and NMR spectroscopies, scanning electron microscopy (SEM), and small-angle X-ray scattering (SAXS). The observations are compared with previous results for the analogous two-component gelation system in which the dendritic headgroups are bound to the aliphatic spacer chain noncovalently via acid-amine interactions. The one-component system is inherently a more effective gelator, partly as a consequence of the additional covalent amide groups that provide a new hydrogen bonding molecular recognition pathway, whereas the two-component analogue relies solely on intermolecular hydrogen bond interactions between the chiral dendritic headgroups. Furthermore, because these amide groups are important in the assembly process for the one-component system, the chiral information preset in the dendritic headgroups is not always transcribed into the nanoscale assembly, whereas for the two-component system, fiber formation is always accompanied by chiral ordering because the molecular recognition pathway is completely dependent on hydrogen bond interactions between well-organized chiral dendritic headgroups.
Abstract:
Because of the importance and potential usefulness of construction market statistics to firms and government, consistency between different sources of data is examined with a view to building a predictive model of construction output using construction data alone. However, a comparison of Department of Trade and Industry (DTI) and Office for National Statistics (ONS) series shows that the correlation coefficient (used as a measure of consistency) of the DTI output and DTI orders data and the correlation coefficient of the DTI output and ONS output data are low. It is not possible to derive a predictive model of DTI output based on DTI orders data alone. The question arises whether or not an alternative independent source of data may be used to predict DTI output data. Independent data produced by Emap Glenigan (EG), based on planning applications, potentially offers such a source of information. The EG data records the value of planning applications and their planned start and finish dates. However, as this data is ex ante and is not correlated with DTI output it is not possible to use this data to describe the volume of actual construction output. Nor is it possible to use the EG planning data to predict DTI construction orders data. Further consideration of the issues raised reveals that it is not practically possible to develop a consistent predictive model of construction output using construction statistics gathered at different stages in the development process.
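The consistency check described above amounts to computing correlation coefficients between the series. The sketch below shows the form of that check with invented quarterly figures standing in for the DTI, orders and EG planning data; a lagged regression would only be justified if the correlations were high.

```python
import numpy as np
import pandas as pd

# Hypothetical quarterly series (the real DTI/ONS/EG figures are not reproduced here).
idx = pd.period_range("1998Q1", periods=24, freq="Q")
rng = np.random.default_rng(2)
dti_output = pd.Series(rng.normal(100, 10, 24), index=idx)
dti_orders = pd.Series(rng.normal(95, 12, 24), index=idx)
eg_planning = pd.Series(rng.normal(110, 15, 24), index=idx)

# Consistency measured as the correlation coefficient between series.
print(f"output vs orders:   r = {dti_output.corr(dti_orders):.2f}")
print(f"output vs planning: r = {dti_output.corr(eg_planning):.2f}")

# If the correlation were strong, a simple one-quarter-lag regression could be tried.
lagged = pd.DataFrame({"output": dti_output, "orders_lag1": dti_orders.shift(1)}).dropna()
slope, intercept = np.polyfit(lagged["orders_lag1"], lagged["output"], 1)
print(f"output ~ {slope:.2f} * orders(t-1) + {intercept:.1f}")
```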
Abstract:
The rheological properties of dough and gluten are important for end-use quality of flour but there is a lack of knowledge of the relationships between fundamental and empirical tests and how they relate to flour composition and gluten quality. Dough and gluten from six breadmaking wheat qualities were subjected to a range of rheological tests. Fundamental (small-deformation) rheological characterizations (dynamic oscillatory shear and creep recovery) were performed on gluten to avoid the nonlinear influence of the starch component, whereas large deformation tests were conducted on both dough and gluten. A number of variables from the various curves were considered and subjected to a principal component analysis (PCA) to get an overview of relationships between the various variables. The first component represented variability in protein quality, associated with elasticity and tenacity in large deformation (large positive loadings for resistance to extension and initial slope of dough and gluten extension curves recorded by the SMS/Kieffer dough and gluten extensibility rig, and the tenacity and strain hardening index of dough measured by the Dobraszczyk/Roberts dough inflation system), the elastic character of the hydrated gluten proteins (large positive loading for elastic modulus [G'], large negative loadings for tan delta and steady state compliance [Jₑ⁰]), the presence of high molecular weight glutenin subunits (HMW-GS) 5+10 vs. 2+12, and a size distribution of glutenin polymers shifted toward the high-end range. The second principal component was associated with flour protein content. Certain rheological data were influenced by protein content in addition to protein quality (area under dough extension curves and dough inflation curves [W]). The approach made it possible to bridge the gap between fundamental rheological properties, empirical measurements of physical properties, protein composition, and size distribution. The interpretation of this study gave indications of the molecular basis for differences in breadmaking performance.
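A minimal sketch of the PCA step, assuming a flours-by-variables data matrix with standardised columns, is shown below; the variable names and values are placeholders rather than the measured rheological data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data matrix: rows are the six flours, columns are rheological and
# compositional variables (G', tan delta, extensibility, protein content, ...).
rng = np.random.default_rng(3)
X = rng.normal(size=(6, 12))
var_names = [f"var{i}" for i in range(12)]  # placeholder variable names

# Standardise, extract the first two principal components, and inspect the loadings
# to see which variables dominate each component (e.g. protein quality vs content).
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)
for pc, loadings in enumerate(pca.components_, start=1):
    top = sorted(zip(var_names, loadings), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"PC{pc} ({pca.explained_variance_ratio_[pc - 1]:.0%} of variance):", top)
```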
Abstract:
Fluorophos and colourimetric procedures for alkaline phosphatase (ALP) testing were compared using milk with raw milk additions, purified bovine ALP additions and heat treatments. Repeatability was between 0.9% and 10.1% for Fluorophos, 3.5% and 46.1% for the Aschaffenburg and Mullen (A&M) procedure and 4.4% and 8.8% for the Scharer rapid test. Linearity (R²) using raw milk addition was 0.96 between Fluorophos and the Scharer procedure. Between the Fluorophos and the A&M procedures, R² values were 0.98, 0.99 and 0.98 for raw milk additions, bovine ALP additions and heat treatments respectively. Fluorophos showed greater sensitivity and was both faster and simpler to perform.
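As context for the figures quoted, repeatability is commonly reported as the coefficient of variation of replicate readings and between-method linearity as the R² of a straight-line fit to paired measurements; a small sketch with made-up numbers follows (it does not reuse the published values).

```python
import numpy as np

# Hypothetical replicate readings for one sample on one instrument, plus paired
# measurements from two methods across a raw-milk dilution series.
fluorophos_reps = np.array([412.0, 405.0, 419.0, 408.0])
paired = np.array([[10, 21], [20, 44], [40, 83], [80, 170]], dtype=float)

# Repeatability expressed as the coefficient of variation of replicate readings.
cv_percent = 100 * fluorophos_reps.std(ddof=1) / fluorophos_reps.mean()

# Linearity between methods expressed as R^2 of a least-squares straight line.
x, y = paired[:, 0], paired[:, 1]
r_squared = np.corrcoef(x, y)[0, 1] ** 2
print(f"repeatability CV = {cv_percent:.1f}%, between-method R^2 = {r_squared:.3f}")
```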
Comparison of Temporal and Standard Independent Component Analysis (ICA) Algorithms for EEG Analysis
Abstract:
A quasi-optical deembedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal in the input and output port of the waveguides. The time-domain responses were discretized and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an AutoRegressive with eXogenous input (ARX) model, as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
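A bare-bones sketch of the ARX identification step follows: the discretised input- and output-port transients are stacked into a regression problem and the model coefficients are found by least squares, from which the z-domain transfer function B(z)/A(z) can be formed and its poles and zeros inspected. The signals, model orders and the omission of the wavelet pre-filtering stage are all simplifications, not the procedure used in the paper.

```python
import numpy as np

# Hypothetical discretised time-domain signatures: u is the reference (input-port)
# terahertz transient, y the transient measured at the waveguide output port.
rng = np.random.default_rng(4)
u = rng.standard_normal(1024)
y = np.convolve(u, [0.5, 0.3, 0.1], mode="full")[:1024] + 0.01 * rng.standard_normal(1024)

def fit_arx(y, u, na=4, nb=4):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] (ARX(na, nb))."""
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]) for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]  # AR coefficients, exogenous (input) coefficients

a_coeffs, b_coeffs = fit_arx(y, u)
# The transfer function in the z-domain follows as B(z)/A(z) built from these
# coefficients; a pole-zero diagram can then be drawn from the roots of A and B.
print("a:", np.round(a_coeffs, 3), "b:", np.round(b_coeffs, 3))
```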