870 results for Modernism and the Scientific Method


Relevance:

100.00%

Publisher:

Abstract:

Regulatory agencies such as Europol, Frontex, Eurojust and CEPOL, as well as bodies such as OLAF, have over the past decade become increasingly active within the institutional architecture constituting the EU’s Area of Freedom, Security and Justice, and are now placed at the forefront of implementing and developing the EU’s internal security model. A prominent feature of agency activity is the large-scale proliferation of ‘knowledge’ on security threats via the production of policy tools such as threat assessments, risk analyses, and periodic and situation reports. These instruments now play a critical role in providing the evidence base that supports EU policymaking, with agency-generated ‘knowledge’ feeding political priority-setting and decision-making within the EU’s new Internal Security Strategy (ISS). This paper examines the nature and purpose of the knowledge generated by EU Home Affairs agencies. It asks: where does this knowledge originate? How does it measure up against criteria of objectivity, scientific rigour, reliability and accuracy? And how is it processed in order to frame threats, justify actions and set priorities under the ISS?

Relevance:

100.00%

Publisher:

Abstract:

A new method for assessing forecast skill and predictability, based on the identification and tracking of extratropical cyclones, has been developed and implemented to obtain detailed information about the prediction of cyclones that cannot be obtained from more conventional analysis methodologies. The cyclones were identified and tracked along the forecast trajectories, and statistics were generated to determine the rate at which the position and intensity of the forecast storms diverge from the analyzed tracks as a function of forecast lead time. The results show a higher level of skill in predicting the position of extratropical cyclones than their intensity. They also show that there is potential to improve the skill in predicting the position by 1–1.5 days and the intensity by 2–3 days via improvements to the forecast model. Further analysis shows that forecast storms move at a slower speed than analyzed storms on average, and that there is a larger error in the predicted amplitudes of intense storms than of weaker storms. The results also show that some storms can be predicted up to 3 days before they are identified as an 850-hPa vorticity center in the analyses. In general, the results show a higher level of skill in the Northern Hemisphere (NH) than in the Southern Hemisphere (SH); however, the rapid growth of NH winter storms is not very well predicted. The impact that observations of different types have on the prediction of extratropical cyclones has also been explored, using forecasts integrated from analyses that were constructed from reduced observing systems. Terrestrial, satellite-based, and surface-based systems were investigated, and the results showed that the predictive skill of the terrestrial system was superior to that of the satellite system in the NH. Further analysis showed that the satellite system was not very good at predicting the growth of the storms. In the SH the terrestrial system has significantly less skill than the satellite system, highlighting the dominance of satellite observations in this hemisphere. The surface system has very poor predictive skill in both hemispheres.
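
A minimal sketch of the track-error statistic described above, assuming that matched forecast and analysed cyclone tracks are already available as arrays of (lat, lon) positions on a common set of lead times; the function names and data layout are illustrative, not the authors' tracking software.

    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine separation (km) between points given in degrees."""
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

    def mean_position_error(forecast_tracks, analysis_tracks):
        """Mean separation of matched forecast and analysed storm centres
        as a function of forecast lead time.

        Both inputs: lists of arrays of shape (n_leadtimes, 2) holding
        (lat, lon); element i of each list refers to the same storm and all
        tracks are assumed to share the same lead times.
        """
        errors = []
        for fc, an in zip(forecast_tracks, analysis_tracks):
            errors.append(great_circle_km(fc[:, 0], fc[:, 1], an[:, 0], an[:, 1]))
        return np.nanmean(np.vstack(errors), axis=0)  # one value per lead time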

Relevance:

100.00%

Publisher:

Abstract:

The life cycle of shallow frontal waves and the impact of deformation strain on their development are investigated using the idealised version of the Met Office non-hydrostatic Unified Model, which includes the same physics and dynamics as the operational forecast model. Frontal wave development occurs in two stages: first, a deformation strain is applied to a front and a positive potential vorticity (PV) strip forms, generated by latent heat release in the frontal updraft; second, as the deformation strain is reduced, the PV strip breaks up into individual anomalies. The circulations associated with the PV anomalies cause shallow frontal waves to form. The structure of the simulated frontal waves is consistent with the conceptual model of a frontal cyclone. Deeper frontal waves are simulated if the stability of the atmosphere is reduced. Deformation strain rates of different strengths are applied to the PV strip to determine whether a deformation strain threshold exists above which frontal wave development is suppressed. An objective measure of frontal wave activity is defined, and frontal wave development is found to be suppressed by deformation strain rates $\ge 0.4\times10^{-5}\,\mathrm{s}^{-1}$. This value compares well with observed deformation strain rate thresholds and with the analytical solution for the minimum deformation strain rate needed to suppress barotropic frontal wave development. The deformation strain rate threshold depends on the strength of the PV strip, with strong PV strips able to overcome stronger deformation strain rates (leading to frontal wave development) than weaker PV strips.
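
The threshold quoted above is a rate of total (resultant) deformation. A minimal sketch of that standard kinematic quantity, computed with centred differences on a regular grid, is given below; the grid spacing and variable names are assumptions, not the Unified Model configuration.

    import numpy as np

    def total_deformation(u, v, dx, dy):
        """Total deformation sqrt(E_st**2 + E_sh**2), in s**-1, of a 2-D wind field.

        u, v : 2-D arrays of wind components (m s-1), indexed [y, x]
        dx, dy : grid spacings (m)
        """
        dudx = np.gradient(u, dx, axis=1)
        dudy = np.gradient(u, dy, axis=0)
        dvdx = np.gradient(v, dx, axis=1)
        dvdy = np.gradient(v, dy, axis=0)
        stretching = dudx - dvdy   # stretching deformation
        shearing = dvdx + dudy     # shearing deformation
        return np.hypot(stretching, shearing)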

Relevance:

100.00%

Publisher:

Abstract:

The experimental variogram computed in the usual way by the method of moments and the Haar wavelet transform are similar in that they filter data and yield informative summaries that may be interpreted. The variogram filters out constant values; wavelets can filter variation at several spatial scales and thereby provide a richer repertoire for analysis, and they demand no assumptions other than that of finite variance. This paper compares the two functions, identifying the part of the Haar wavelet transform that gives it its advantages. It goes on to show that the generalized variogram of order k = 1, 2, and 3 filters linear, quadratic, and cubic polynomials from the data, respectively, which correspond with more complex wavelets in Daubechies's family. The additional filter coefficients of the latter can reveal features of the data that are not evident in its usual form. Three examples, in which data recorded at regular intervals on transects are analyzed, illustrate the extended form of the variogram. The apparent periodicity of gilgais in Australia seems to be accentuated as filter coefficients are added, but otherwise the analysis provides no new insight. Analysis of hyperspectral data with a strong linear trend showed that the wavelet-based variograms filtered it out. Adding filter coefficients in the analysis of the topsoil across the Jurassic scarplands of England changed the upper bound of the variogram; it then resembled the within-class variogram computed by the method of moments. To elucidate these results, we simulated several series of data to represent a random process with values fluctuating about a mean, data with a long-range linear trend, data with local trend, and data with stepped transitions. The results suggest that the wavelet variogram can filter out the effects of long-range trend and of transitions from one class to another, as across boundaries, but not of local trend.
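
For reference, a minimal sketch of the method-of-moments (Matheron) estimator of the experimental variogram for data recorded at regular intervals along a transect, the quantity against which the wavelet summaries are compared; the function name and interface are illustrative.

    import numpy as np

    def experimental_variogram(z, max_lag):
        """Semivariance gamma(h) = 1/(2*N(h)) * sum (z[i+h] - z[i])**2
        for integer lags h = 1 .. max_lag, in units of the sampling interval.
        """
        z = np.asarray(z, dtype=float)
        gamma = np.empty(max_lag)
        for h in range(1, max_lag + 1):
            diffs = z[h:] - z[:-h]          # all pairs separated by lag h
            gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
        return gamma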

Relevance:

100.00%

Publisher:

Abstract:

There is increasing concern about soil enrichment with K+ and subsequent potential losses following long-term application of poor-quality water to agricultural land. Different models are increasingly being used for predicting or analyzing water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to the potassium (K+) leaching data. The CDE and CLT models produced equivalent goodness of fit. Breakthrough curves simulated for a range of CaCl2 concentrations, using the parameters estimated at 15 mmol l⁻¹ CaCl2, showed an earlier peak position associated with a higher K+ concentration as the CaCl2 concentration used in the leaching experiments decreased. In another approach, the parameters estimated from the 15 mmol l⁻¹ CaCl2 solution were used for all other CaCl2 concentrations, and the best value of the retardation factor (R) was optimised for each data set; this gave better predictions. With decreasing CaCl2 concentration, the value of R required is larger than that measured (except for 10 mmol l⁻¹ CaCl2) if the parameters estimated at 15 mmol l⁻¹ CaCl2 are used. Both models suffer from the fact that they need to be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
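
A minimal sketch of the retardation-factor optimisation described above, keeping only the leading term of the standard analytical breakthrough solution of the equilibrium CDE for a step input and fitting R by least squares with the dispersion coefficient and pore-water velocity held fixed; the column length and parameter values are illustrative assumptions, not those of the reported experiments.

    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    L = 0.20    # column length (m), illustrative
    v = 1e-5    # pore-water velocity (m s-1), illustrative
    D = 5e-8    # dispersion coefficient (m2 s-1), illustrative

    def cde_breakthrough(t, R):
        """Approximate relative concentration C/C0 at x = L for a step input
        (leading term of the analytical solution of the equilibrium CDE)."""
        return 0.5 * erfc((R * L - v * t) / (2.0 * np.sqrt(D * R * t)))

    def fit_retardation(t_obs, c_rel_obs, r_guess=2.0):
        """Optimise R against observed relative concentrations C/C0 (t_obs > 0)."""
        popt, pcov = curve_fit(cde_breakthrough, t_obs, c_rel_obs,
                               p0=[r_guess], bounds=(1.0, np.inf))
        return popt[0], np.sqrt(pcov[0, 0])   # estimate and its standard error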

Relevance:

100.00%

Publisher:

Abstract:

This chapter examines two genres of text which were extremely popular in the late-medieval and early modern periods, paying particular attention to women users. The printed almanacs of sixteenth-century England were enormously influential; yet their contents are so formulaic and repetitive as to appear almost empty of valuable information. Their most striking feature is their astrological guidance for the reader, and this has led to them being considered 'merely' the repository of popular superstition. Only in the last decade have themes of gender and medicine been given serious consideration in relation to almanacs, but this work has focused on the seventeenth century. This chapter centres on a detailed analysis of sixteenth-century English almanacs and the various kinds of scientific and household guidance they offered to women readers. Both compilers and users needed to chart a safe course through the religious and scientific battles of the time, and the complexities involved are demonstrated by considering the almanacs in relation to competing sources of guidance: Books of Hours, and 'scientific' works such as the medical calendars compiled by Oxford scholars in the late middle ages. A key feature of this chapter is that it gives practical interpretations of this complex information, for the guidance of modern readers unfamiliar with astrology.

Relevance:

100.00%

Publisher:

Abstract:

The development of high-throughput techniques ('chip' technology) for measurement of gene expression and gene polymorphisms (genomics), and of techniques for measuring global protein expression (proteomics) and metabolite profiles (metabolomics), is revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven to be more complex, and their identification more elusive, than previously thought. This is largely because much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in an inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases adds further complexity, requiring very large studies to disentangle the relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning the gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for provision of individualised dietary advice and for development of food products that optimise disease prevention. Application of the new technologies in nutrition research offers considerable potential for the development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits on offer, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health poses considerable ethical and societal, as well as scientific, challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.

Relevance:

100.00%

Publisher:

Abstract:

1. The feeding rates of many predators and parasitoids exhibit type II functional responses, with a decelerating rate of increase towards an asymptotic value as the density of their prey or hosts increases. Holling's disc equation describes such relationships and predicts that the asymptotic feeding rate at high prey densities is set by handling time, while the rate at which feeding rate increases with increased prey density is determined by searching efficiency. Searching efficiency and handling time are also parameters in other models which describe the functional response. Models which incorporate functional responses in order to predict the effects of food shortage thus rely upon a clear understanding and accurate quantification of searching efficiency and handling time. 2. Blackbirds Turdus merula exhibit a type II functional response and use pause-travel foraging, a foraging technique in which animals search for prey while stationary and then move to capture prey. Pause-travel foraging allows accurate direct measurement of feeding rate and of both searching efficiency and handling time. We use blackbirds as a model species to: (i) compare observed measures of both searching efficiency and handling time with those estimated by statistically fitting the disc equation to the observed functional response; and (ii) investigate alternative measures of searching efficiency derived by the established method, in which the search area is assumed to be circular, and by a new method that we propose in which it is not. 3. We find that the disc equation can adequately explain the functional response of blackbirds feeding on artificial prey. However, this depends critically upon how searching efficiency is measured. Two variations on the previous method of measuring search area (a component of searching efficiency) overestimated searching efficiency, and hence predicted feeding rates higher than those observed. Two variations of our alternative approach produced lower estimates of searching efficiency, closer to that estimated by fitting the disc equation, and hence predicted feeding rate more accurately. Our study shows the limitations of the previous method of measuring searching efficiency, and describes a new method for measuring searching efficiency more accurately.
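
A minimal sketch of how searching efficiency and handling time can be estimated by statistically fitting the disc equation to an observed functional response, as referred to in point (i); the prey densities and feeding rates below are placeholders, not the blackbird data.

    import numpy as np
    from scipy.optimize import curve_fit

    def disc_equation(N, a, h):
        """Type II functional response: feeding rate = a*N / (1 + a*h*N)."""
        return a * N / (1.0 + a * h * N)

    # prey density (items per unit area) and observed feeding rate, illustrative
    N_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    f_obs = np.array([0.02, 0.035, 0.055, 0.07, 0.08, 0.085])

    (a_hat, h_hat), _ = curve_fit(disc_equation, N_obs, f_obs, p0=[0.05, 10.0])
    print(f"searching efficiency a = {a_hat:.3g}, handling time h = {h_hat:.3g}")
    # The asymptotic feeding rate at high prey density is 1/h, as stated in the text.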

Relevance:

100.00%

Publisher:

Abstract:

Conventional seemingly unrelated estimation of the almost ideal demand system is shown to lead to small-sample bias and distortions in the size of a Wald test for symmetry and homogeneity when the data are co-integrated. A fully modified estimator is developed in an attempt to remedy these problems. It is shown that this estimator reduces the small-sample bias but fails to eliminate the size distortion. Bootstrapping is shown to be ineffective as a method of removing small-sample bias in both the conventional and fully modified estimators. Bootstrapping is effective, however, as a method of removing size distortion, and performs equally well in this respect with both estimators.
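
A minimal sketch of the resampling idea, shown for a single-equation OLS Wald test for clarity; the paper's setting is the multi-equation almost ideal demand system estimated by seemingly unrelated regression, so this illustrates the bootstrap principle only, not that estimator.

    import numpy as np

    def _wald(X, y, R, r):
        """Wald statistic for the linear restriction R @ beta = r under OLS."""
        n, k = X.shape
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        XtX_inv = np.linalg.inv(X.T @ X)
        cov = (resid @ resid / (n - k)) * XtX_inv
        d = R @ beta - r
        return d @ np.linalg.solve(R @ cov @ R.T, d), beta, resid, XtX_inv

    def bootstrap_wald_pvalue(X, y, R, r, n_boot=999, seed=0):
        """Residual bootstrap p-value for the Wald test, resampling under the null."""
        rng = np.random.default_rng(seed)
        W_obs, beta, _, XtX_inv = _wald(X, y, R, r)
        # Restricted estimate: beta - (X'X)^-1 R' [R (X'X)^-1 R']^-1 (R beta - r)
        adj = XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ beta - r)
        beta_null = beta - adj
        resid_null = y - X @ beta_null
        exceed = 0
        for _ in range(n_boot):
            y_star = X @ beta_null + rng.choice(resid_null, size=len(y), replace=True)
            exceed += _wald(X, y_star, R, r)[0] >= W_obs
        return (1 + exceed) / (n_boot + 1)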

Relevance:

100.00%

Publisher:

Abstract:

Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. These data need to be processed efficiently and automatically, from the mass spectrometer through to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
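
A minimal sketch of the final quantification step, combining peptide-level 14N (light) and 15N (heavy) intensities into per-protein abundance ratios; the data layout is an assumption, and this is not the TPP's own ratio calculation.

    import numpy as np
    from collections import defaultdict

    def protein_ratios(peptide_rows):
        """peptide_rows: iterable of (protein_id, intensity_14N, intensity_15N).

        Returns {protein_id: median 14N/15N ratio}, computed on log2 ratios so
        that up- and down-regulation are treated symmetrically.
        """
        log_ratios = defaultdict(list)
        for protein, light, heavy in peptide_rows:
            if light > 0 and heavy > 0:            # skip peptides with a missing channel
                log_ratios[protein].append(np.log2(light / heavy))
        return {p: 2.0 ** np.median(r) for p, r in log_ratios.items()}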

Relevance:

100.00%

Publisher:

Abstract:

Certain forkhead (FOX) transcription factors have been shown to play an intrinsic role in controlling cell cycle progression. In particular, the FoxO subclass has been shown to regulate cell cycle entry and exit, whereas the expression and activity of FoxM1 is important for the correct coupling of DNA synthesis to mitosis. In this chapter, I describe a method for measuring FoxO and FoxM1 transcription factor DNA binding in nuclear extracts from mammalian cells.

Relevance:

100.00%

Publisher:

Abstract:

Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification. The last thing we would want to do is to inhibit it. ARIs (AR inhibitors) have a number of actions in the cell which are not specific and which do not involve them binding to AR. These include peroxy-radical scavenging and effects of metal ion chelation. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types and sets of design rules, and to the key conclusion that such designs should be based on some prior knowledge of Km and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the estimated parameters, and is suitable for simple or complex steady-state models.
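
A minimal sketch of what a design 'based on some prior knowledge of Km and/or the kinetic model' can look like in practice: for a Michaelis-Menten model with guessed parameters, candidate sets of substrate concentrations are scored by the determinant of the Fisher information matrix and the best pair is kept (a locally D-optimal design). This is an illustration only, not the authors' Bayesian utility-function method.

    import itertools
    import numpy as np

    def mm_sensitivities(S, vmax, km):
        """Partial derivatives of v = Vmax*S/(Km+S) with respect to (Vmax, Km)."""
        dv_dvmax = S / (km + S)
        dv_dkm = -vmax * S / (km + S) ** 2
        return np.column_stack([dv_dvmax, dv_dkm])

    def best_two_point_design(candidates, vmax_guess, km_guess):
        """Choose the pair of substrate concentrations maximising det(J'J)."""
        best, best_det = None, -np.inf
        for pair in itertools.combinations(candidates, 2):
            J = mm_sensitivities(np.array(pair), vmax_guess, km_guess)
            d = np.linalg.det(J.T @ J)
            if d > best_det:
                best, best_det = pair, d
        return best

    # e.g. candidate concentrations spanning 0.1*Km to 10*Km for a guessed Km of 1.0
    print(best_two_point_design(np.linspace(0.1, 10.0, 50), vmax_guess=1.0, km_guess=1.0))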

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract), and to the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even spacing or multiplicative increases. At least 60% must be below the Km (or each Km, if there is more than one dissociation constant) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
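
A minimal sketch of how the two sampling strategies mentioned above (an even spread versus roughly 60% of points below Km and 40% above) could be compared by simulation: generate noisy Michaelis-Menten data under each design, refit repeatedly, and inspect the spread of the fitted Km. All numbers are illustrative assumptions, not the reported glyoxalase or aldehyde dehydrogenase results.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    VMAX, KM, NOISE_SD, N_SIM = 1.0, 1.0, 0.02, 500   # assumed "true" values

    def mm(S, vmax, km):
        return vmax * S / (km + S)

    def km_spread(design):
        """Standard deviation of fitted Km over repeated noisy simulations."""
        estimates = []
        for _ in range(N_SIM):
            v = mm(design, VMAX, KM) + rng.normal(0.0, NOISE_SD, size=design.size)
            try:
                (_, km_hat), _ = curve_fit(mm, design, v, p0=[VMAX, KM])
                estimates.append(km_hat)
            except RuntimeError:          # occasional non-convergence
                continue
        return np.std(estimates)

    even_design = np.linspace(0.1, 5.0, 10)                        # even spread
    split_design = np.concatenate([np.linspace(0.1, 0.9, 6),       # 60% below Km
                                   np.linspace(1.5, 5.0, 4)])      # 40% above Km
    print("even spread :", km_spread(even_design))
    print("60/40 split :", km_spread(split_design))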