975 results for Intervals of singularity
Abstract:
The broad objectives of the work were to develop standard methods for the routine biological surveillance of river water quality using non-planktonic algae. Studies on sampling methodology indicated that natural substrata should be sampled directly wherever possible, but for routine purposes only a semi-quantitative approach was found to be feasible. Artificial substrata were considered useful for sample collection in deeper waters, and of three different types tested, polythene strips were selected for further investigation, essentially on grounds of practicality. These were tested in the deeper reaches of a wide range of river types and water qualities: 26 pool sites in 14 different rivers were studied over a period of 9 months. At each site, the assemblages developing on 3 strips following a 4-week or, less commonly, a 3-week immersion period were analysed quantitatively. Where possible, the natural substrata were also sampled semi-quantitatively at each site and at a nearby riffle. The results of this survey were very fragmentary: many strips failed to yield useful data, and the results were often difficult to interpret and of limited value for water quality surveillance purposes. In one river, the Churnet, the natural substrata at 14 riffle sites were sampled semi-quantitatively on 14 occasions at intervals of 4 weeks. In this survey, the results were more readily interpreted in relation to water quality, and no special data processing was found to be necessary or helpful. Further studies on the filamentous green alga Cladophora showed that this alga may have some value as a bioaccumulation indicator for metals and as a bioassay organism for assessing the algal growth-promoting potential of natural river waters.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in the cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus stock on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models can be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three components, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
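To make the four decision rules concrete, here is a minimal Python sketch of the ordering logic, under illustrative assumptions: the order cover is a single number, and the function names and arguments are hypothetical, not taken from the thesis.

```python
import math

def qr_order(order_cover, R, Q):
    """(Q, R): order a fixed quantity Q whenever the order cover
    (stock in hand plus stock on order) equals or falls below R."""
    return Q if order_cover <= R else 0

def nqrt_order(order_cover, R, Q):
    """(nQ, R, T): at a review, if cover <= R, order the smallest
    integer multiple n of Q that lifts the new cover just above R."""
    if order_cover > R:
        return 0
    n = math.floor((R - order_cover) / Q) + 1
    return n * Q

def mt_order(order_cover, M):
    """(M, T): at every review, order enough to raise the cover to M."""
    return max(M - order_cover, 0)

def mrt_order(order_cover, R, M):
    """(M, R, T): at a review, order up to M only if cover <= R."""
    return M - order_cover if order_cover <= R else 0
```

The (Q,R) rule is checked continuously as transactions occur, whereas the other three rules are evaluated only at reviews spaced T apart.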
Abstract:
In efficiency studies using the stochastic frontier approach, the main focus is to explain inefficiency in terms of some exogenous variables and to compute the marginal effect of each of these determinants. Although inefficiency is estimated by its mean conditional on the composed error term (the Jondrow et al., 1982 estimator), the marginal effects are computed from the unconditional mean of inefficiency (Wang, 2002). In this paper we derive the marginal effects based on the Jondrow et al. estimator and use the bootstrap method to compute confidence intervals of the marginal effects.
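The confidence intervals can be obtained with a standard percentile bootstrap. The sketch below is a generic illustration, not the paper's code: `statistic` stands for a user-supplied function that re-estimates the frontier model on a resample and returns the Jondrow et al.-based marginal effect.

```python
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.
    `data` is a NumPy array of observations (rows); `statistic`
    maps a resampled dataset to a scalar, e.g. a marginal effect."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        stats[b] = statistic(data[idx])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```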
Abstract:
This paper surveys research in data mining related to discovering dependencies between attributes in databases. We consider a number of approaches to finding the distribution intervals of association rules, to discovering branching dependencies between a given set of attributes and a given attribute in a database relation, to finding fractional dependencies between a given set of attributes and a given attribute in a database relation, and to collaborative filtering.
Abstract:
Experimental investigations of one of the oldest bells in Ukraine («Mazepa»; see Appendix 1) are presented in this article. Spectral and spectro-temporal analyses of the bell's ringing were carried out, and the main oscillation frequencies and the musical intervals of the sound were determined. A comparison of the bell's sound with well-known bells of Russia and Bulgaria is given.
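A minimal sketch of how such partials and their musical intervals might be extracted from a recording, assuming a sampled mono signal and its sampling rate; the function names are illustrative, not from the article.

```python
import numpy as np

def dominant_partials(signal, fs, n_peaks=5):
    """Estimate the strongest partials (Hz) of a bell strike recording
    from the magnitude spectrum of an FFT. (In practice, neighbouring
    bins of one peak should be merged before ranking.)"""
    windowed = signal * np.hanning(len(signal))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    order = np.argsort(spectrum)[::-1]                   # bins by descending magnitude
    return sorted(freqs[order[:n_peaks]])

def interval_in_cents(f1, f2):
    """Musical interval between two partials, in cents (1200 = one octave)."""
    return 1200 * np.log2(f2 / f1)
```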
Abstract:
2000 Mathematics Subject Classification: 62H15, 62P10.
Abstract:
MSC 2010: 46F30, 46F10
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point - commonly referred to as the pI - can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge - and thus the electrophoretic mobility - of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and the proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance will strongly depend on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
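The iterative approach that the abstract contrasts with machine learning can be sketched as a bisection on the net-charge curve: net charge decreases monotonically with pH, and the pI is its zero. This is a generic illustration, not the pIR implementation; the pKa values are rounded, EMBOSS-like assumptions.

```python
# Rounded, EMBOSS-like pKa values -- illustrative assumptions only.
PKA_POS = {'K': 10.8, 'R': 12.5, 'H': 6.5}
PKA_NEG = {'D': 3.9, 'E': 4.1, 'C': 8.5, 'Y': 10.1}
PKA_NTERM, PKA_CTERM = 8.6, 3.6

def net_charge(seq, pH):
    """Net peptide charge at a given pH from Henderson-Hasselbalch fractions."""
    pos = [PKA_NTERM] + [PKA_POS[a] for a in seq if a in PKA_POS]
    neg = [PKA_CTERM] + [PKA_NEG[a] for a in seq if a in PKA_NEG]
    charge = sum(1.0 / (1.0 + 10 ** (pH - pka)) for pka in pos)    # protonated basics: +1
    charge -= sum(1.0 / (1.0 + 10 ** (pka - pH)) for pka in neg)   # deprotonated acids: -1
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisection for the pH at which net charge crosses zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(seq, mid) > 0:
            lo = mid            # still positively charged: pI lies higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(isoelectric_point("ACDKRH"), 2))   # example peptide
```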
Abstract:
The nation's freeway systems are becoming increasingly congested. A major contributor to freeway congestion is traffic incidents: non-recurring events such as accidents or stranded vehicles that cause a temporary reduction in roadway capacity, and that can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic away from incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors; determining and understanding these factors can help identify and develop better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success. This dissertation attempts to improve on this previous effort by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid in decision making about traffic diversion in the event of an ongoing incident. Multiple data mining techniques were applied and evaluated in the research: multiple linear regression analysis and a decision tree based method were used to develop the offline models, and a rule-based method and a tree algorithm called M5P were used to develop the online models. The results show that the models can in general achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for actual applications.
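A minimal sketch of the tree-based modelling step, using a CART regression tree as a stand-in for the dissertation's decision-tree and M5P methods; the feature names and synthetic data are purely illustrative, not the FDOT incident database.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical features, e.g. [lanes_blocked, vehicles_involved, hour_of_day,
# severity]; synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = 60 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 500)   # duration in minutes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=20).fit(X_tr, y_tr)
print("MAE (minutes):", mean_absolute_error(y_te, tree.predict(X_te)))
```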
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit to the data than the Poisson distribution model, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. With improved traffic and crash data quality, however, the crash prediction power of SPF models may be further improved.
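A minimal sketch of fitting a Negative Binomial SPF of the common form mu = exp(b0) * AADT^b1 with statsmodels; the AADT values and crash counts below are synthetic stand-ins, not the FDOT dataset.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for AADT and annual crash counts on one-mile segments.
rng = np.random.default_rng(0)
aadt = rng.uniform(5e3, 8e4, 400)
crashes = rng.poisson(np.exp(-6.0 + 0.8 * np.log(aadt)))

# SPF form mu = exp(b0) * AADT^b1 is log-linear in ln(AADT).
X = sm.add_constant(np.log(aadt))
nb = sm.NegativeBinomial(crashes, X).fit(disp=False)
print(nb.params)   # b0, b1 and the estimated over-dispersion parameter alpha
```

A zero-inflated variant (ZeroInflatedNegativeBinomialP in recent statsmodels versions) can be fitted in the same way to handle excess zeros.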
Abstract:
Hearing the news of the death of Diana, Princess of Wales, in a traffic accident is taken as an analogue for being a percipient but uninvolved witness to a crime, or a witness to another person's sudden confession to some illegal act. Such an event (known in the literature as a “reception event”) has previously been hypothesized to cause one to form a special type of memory commonly known as a “flashbulb memory” (FB) (Brown and Kulik, 1977). FBs are hypothesized to be especially resilient against forgetting, highly detailed (including peripheral details), clear, and inspiring great confidence in the individual in their accuracy. FBs depend for their formation upon surprise, emotional valence, and impact or consequentiality to the witness of the initiating event, and are thought to be enhanced by frequent rehearsal. FBs are very important in the context of criminal investigation and litigation, in that investigators and jurors usually place great store in witnesses, regardless of their actual accuracy, who claim to have a clear and complete recollection of an event and who express this confidently. Therefore, the lives, or at least the freedom, of criminal defendants, and the fortunes of civil litigants, hang on the testimony of witnesses professing to have FBs. In this study, which includes a large and diverse sample (N = 305), participants were surveyed within 2-4 days after hearing of the fatal accident, and again at intervals of 2 and 4 weeks and 6, 12, and 18 months. Contrary to the FB hypothesis, I found that participants' FBs degraded over time, beginning at least as early as two weeks post event. At about 12 months the memory trace stabilized, resisting further degradation. Repeated interviewing did not have any negative effect upon accuracy, contrary to concerns in the literature. Analysis by correlation and regression indicated no effect or predictive power for participant age, emotionality, confidence, or student status in relation to accuracy of recall; nor was participant confidence in accuracy predicted by emotional impact, as hypothesized. The results also indicate that, contrary to the notions of investigators and jurors, witnesses become more inaccurate over time regardless of their confidence in their memories, even for highly emotional events.
Abstract:
Twenty-four manganese nodules from the surface of the sea floor and fifteen buried nodules were studied. With three exceptions, the nodules were collected from the area covered by Valdivia Cruise VA 04, some 1200 nautical miles southeast of Hawaii. Age determinations were made using the ionium method. In order to get a true reproduction of the activity distribution in the nodules, they were cut in half and placed for one month on nuclear emulsion plates to determine the alpha-activity of the ionium and its daughter products. Special methods of counting the alpha-tracks allowed resolution to depth intervals of 0.125 mm. For the first time it was possible to resolve zones of rapid growth (impulse growth), with growth rates s > 50 mm/10⁶ yr, and interruptions in growth. With few exceptions the average rate of growth of all nodules was surprisingly uniform at 4-9 mm/10⁶ yr. No growth could be recognized radiometrically in the buried nodules. One exceptional nodule has had recent impulse growth and, in the material formed, the ionium is not yet in equilibrium with its daughter products. Individual layers in one nodule from the Indian Ocean could be dated, and an average time interval of t = 2600±400 yr was necessary to form one layer. The alternation between iron- and manganese-rich parts of the nodules was made visible by colour differences resulting from special treatment of cut surfaces with HCl vapour. The zones of slow growth of one nodule are relatively enriched in iron. Earlier attempts to find paleomagnetic reversals in manganese nodules were continued; despite considerable improvement in areal resolution, reversals were not detected in the nodules studied. Comparison of the surface structure, the microstructure in section, and the radiometric dating shows that there are erosion surfaces and growth surfaces on the outer surfaces of the manganese nodules. The formation of cracks in the nodules was studied in particular. The model of age-dependent nodule shrinkage and cracking indicates, surprisingly, that the nodules break after exceeding a certain age and/or size; the breaking apart of manganese nodules is consequently a continuous process, not of catastrophic or discontinuous origin. The microstructure of the nodules exhibits differences in the mechanism and rate of accretion of material, referred to here briefly as the accretion form. Thus both non-directional and directional growth may be observed inside the nodules. Nodules with large accretion forms have grown faster than smaller ones; consequently, parallel layers indicate slow growth. The upper surfaces of the nodules, protruding into the bottom water, appear to be more prone to growth disturbances than the lower surfaces, immersed in the sediment. Features of some nodules show that, as they developed, they neither turned nor rolled. Still unknown is the mechanism that keeps the nodules at the surface during continuous sedimentation. All in all, the nodules remain objects with their own distinctive problems, and the hope of using them as a kind of history book still seems very remote.
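The growth rates follow from the standard excess-ionium (230Th) decay model: in a steadily growing nodule the alpha-activity declines exponentially with depth, so a line fit of log-activity against depth yields the rate. A minimal sketch under that assumption (function names illustrative, not from the study):

```python
import numpy as np

TH230_HALFLIFE_YR = 75_400                    # approx. half-life of 230Th (ionium)
DECAY_CONST = np.log(2) / TH230_HALFLIFE_YR   # lambda, in 1/yr

def growth_rate(depths_mm, activities):
    """Fit ln A(z) = ln A0 - (lambda/s) * z and return the growth rate s
    in mm per million years."""
    slope, _ = np.polyfit(depths_mm, np.log(activities), 1)
    s_mm_per_yr = -DECAY_CONST / slope
    return s_mm_per_yr * 1e6

# Example: activity profile sampled at 0.125 mm depth intervals
depths = np.arange(0, 2.0, 0.125)
activities = 100 * np.exp(-DECAY_CONST * depths / 5e-6)   # nodule growing at 5 mm/Myr
print(round(growth_rate(depths, activities), 1))          # recovers ~5.0
```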
Abstract:
Time-series of varve properties and geochemistry were established from varved sediments of Lake Woserin (north-eastern Germany), covering the recent period AD 2010-1923 and the Mid-Holocene time-window 6400-4950 varve years before present (vyr BP), using microfacies analyses, X-ray fluorescence scanning (µ-XRF), microscopic varve chronology and 14C dating. The microscopic varve chronology was compared to a macroscopic varve chronology for the same sediment interval. Calcite layer thickness during the recent period is significantly correlated with increases in local annual precipitation (r=0.46, p=0.03) and reduced air pressure (r=-0.72, p<0.0001). Meteorologically consistent with enhanced precipitation at Lake Woserin, a composite 500 hPa anomaly map for years with calcite layer thickness >1 standard deviation depicts a negative wave-train air-pressure anomaly centred over southern Europe, with north-eastern Germany at its northern frontal zone. Three centennial-scale intervals of thicker calcite layers around the Mid-Holocene periods 6200-5900, 5750-5400 and 5300-4950 vyr BP might reflect humid conditions favouring calcite precipitation through the transport of Ca2+ ions into Lake Woserin, synchronous with wetter conditions in Europe. Calcite layer thickness oscillations of about 88 and 208 years resemble the solar Gleissberg and Suess cycles, suggesting that the recorded hydroclimate changes in north-eastern Germany are modified by solar influences on synoptic-scale atmospheric circulation. However, parts of the periods of thicker calcite layers around 5750-5400 and 5200 vyr BP also coincide with enhanced human catchment activity at Lake Woserin. Calcite precipitation during these time-windows might therefore have been further favoured by anthropogenic deforestation mobilizing Ca2+ ions and/or by lake eutrophication.
Abstract:
Variations in the sediment input to the Namaqualand mudbelt during the Holocene are assessed using an integrative terrestrial-to-marine, source-to-sink approach. Geochemical and Sr and Nd isotopic signatures are used to distinguish fluvial sediment source areas. Relative to the sediments of the Olifants River, craton outcrops in the northern Orange River catchment have a more radiogenic Sr and a more unradiogenic Nd isotopic signature. Furthermore, upper Orange River sediments are rich in heavier elements such as Ti and Fe derived from the chemical weathering of Drakensberg flood basalt. Suspension load signatures change along the Orange River's westward transit as northern catchments contribute physical weathering products from the Fish and Molopo River catchment area. Marine cores offshore of the Olifants (GeoB8323-2) and Orange (GeoB8331-4) River mouths show pulses of increased contribution of Olifants River and upper Orange River input, respectively. These pulses coincide with intervals of increased terrestrial organic matter flux and increased paleo-production at the respective core sites. We attribute this to an increase in fluvial activity and vegetation cover in the adjacent catchments during more humid climate conditions. The contrast in the timing of these wet phases in the catchment areas reflects the bipolar behavior of the South African summer and winter rainfall zones. While rainfall in the Orange River catchment is related to southward shifts of the ITCZ, rainfall in the Olifants catchment is linked to northward shifts of the Southern Hemisphere Westerly storm tracks. The latter may also have increased southern Benguela upwelling in the past by reducing the shedding of Agulhas eddies into the Atlantic. The high-resolution records of latitudinal shifts in these atmospheric circulation systems correspond to the late Holocene centennial- to millennial-scale climate variability evident in Antarctic ice core records. The mudbelt cores indicate that phases of high summer rainfall zone humidity and low winter rainfall zone humidity (at ca. 2.8 and 1 ka BP) may be synchronous with Antarctic warming events. On the other hand, dry conditions in the summer rainfall zone along with wet conditions in the winter rainfall zone (at ca. 3.3, 2 and 0.5 ka BP) may be associated with Antarctic cooling events.
Abstract:
The distribution of diatoms and of planktonic and benthic foraminifers, as well as the correlation of components of the sandy grain-size fraction, were studied in the Quaternary sediment core LV28-42-5 (720 cm long) collected on the southeastern slope (1045 m water depth) of the Institute of Oceanology Rise, Sea of Okhotsk. This study allowed reconstruction of the principal features of the area's paleoceanographic evolution. During the penultimate and last continental glaciations (isotope stages 6 and 4-2) and during the later period of the last interglacial (substages 5.d-5.a), the following conditions were characteristic of this area: low surface water temperatures; terrigenous sediment accumulation, including coarse-grained ice-rafted material; minimum bioproductivity and microfossil content in sediments; low sea level; reduced water exchange with the ocean; and a low position of the old deep Pacific water. During the interglacial optimum (substage 5.e), as well as during the last deglaciation and the Holocene (stage 1), water temperature and bioproductivity increased, sea level rose, and active surface water exchange between the Sea of Okhotsk and the Pacific Ocean and the Sea of Japan took place. This resulted in intensive inflow of the old deep Pacific water into the Sea of Okhotsk and elevation of its upper boundary by a few hundred meters. During the later intervals of these warm periods, a dichothermal structure of the upper water layer formed and diatom oozes accumulated.