961 results for Rainflow counting
Abstract:
We analyze photoionization and ion detection as a means of accurately counting ultracold atoms. We show that it is possible to count clouds containing many thousands of atoms with accuracies better than N^{-1/2} with current technology. This allows the direct probing of sub-Poissonian number statistics of atomic samples. The scheme can also be used for efficient single-atom detection with high spatiotemporal resolution. All aspects of a realistic detection scheme are considered, and we discuss experimental situations in which such a scheme could be implemented.
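For scale, a brief worked benchmark of the N^{-1/2} threshold (our illustration; the example value of N is not from the abstract):

```latex
% Shot-noise (Poissonian) benchmark for atom counting:
\[
  \frac{\Delta N}{N} = \frac{\sqrt{N}}{N} = N^{-1/2},
  \qquad\text{e.g. } N = 10^{4} \;\Rightarrow\; N^{-1/2} = 1\%.
\]
% Counting with accuracy better than N^{-1/2} therefore resolves
% sub-Poissonian number statistics, which are otherwise buried
% in the Poissonian fluctuations.
```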
Abstract:
We study Greenberger-Horne-Zeilinger-type (GHZ-type) and W-type three-mode entangled coherent states. Both types of entangled coherent states violate Mermin's version of the Bell inequality with threshold photon detection (i.e., without photon counting). Such an experiment can be performed using linear optics elements and threshold detectors with significant Bell violations for GHZ-type entangled coherent states. However, to demonstrate Bell-type inequality violations for W-type entangled coherent states, additional nonlinear interactions are needed. We also propose an optical scheme to generate W-type entangled coherent states in free-traveling optical fields. The required resources for the generation are a single-photon source, a coherent state source, beam splitters, phase shifters, photodetectors, and Kerr nonlinearities. Our scheme does not necessarily require strong Kerr nonlinear interactions; i.e., weak nonlinearities can be used for the generation of the W-type entangled coherent states. Furthermore, it is also robust against inefficiencies of the single-photon source and the photon detectors.
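For reference, a worked statement of the inequality the abstract invokes (our addition; this is the standard three-party Mermin form with two ±1-valued settings per mode, and sign conventions vary across the literature):

```latex
% Mermin's three-party Bell inequality (settings A,A'; B,B'; C,C';
% outcomes +/-1). For any local hidden-variable model:
\[
  \langle ABC \rangle - \langle AB'C' \rangle
  - \langle A'BC' \rangle - \langle A'B'C \rangle \le 2,
\]
% while quantum mechanics reaches the algebraic maximum of 4
% on a GHZ state, hence the "significant Bell violations".
```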
Abstract:
Proceedings of the 11th Australasian Remote Sensing and Photogrammetry Conference
Abstract:
This paper presents the results of designing a device that forms part of an automated information system for counting, reporting, and documenting the quantity of bottles produced in a glass-processing factory. The block diagram of the device is given. The system can also be applied to counting output quantities in other discrete production processes.
Abstract:
In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that exploits word co-occurrence, thereby avoiding any dependence on existing linguistic resources such as electronic dictionaries or lexico-semantic databases (thesauri, ontologies). Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and it has been used extensively in information retrieval and text summarization. Our architecture proposes a language-independent topic segmentation system that addresses three main problems evidenced by previous research: systems based solely on lexical repetition, which show reliability problems; systems based on lexical cohesion that rely on linguistic resources usually available only for dominant languages and therefore do not apply to less-resourced ones; and systems that require pre-existing training data. For that purpose, we use only statistics on words and word sequences computed from a set of texts. This provides a flexible solution that may narrow the gap between dominant and less-resourced languages, allowing equivalent access to information.
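A minimal sketch of similarity-based boundary detection of the kind this line of work builds on (a generic TextTiling-style baseline using plain bag-of-words cosine, not the paper's informative co-occurrence measure; `window` and `depth` are illustrative parameters):

```python
# Topic boundaries are placed at valleys of inter-block lexical similarity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def segment(sentences, window=3, depth=0.1):
    # Score each gap between sentences by the similarity of the
    # `window`-sentence blocks on either side of it.
    bags = [Counter(s.lower().split()) for s in sentences]
    gaps = []
    for i in range(window, len(bags) - window + 1):
        left = sum(bags[i - window:i], Counter())
        right = sum(bags[i:i + window], Counter())
        gaps.append((i, cosine(left, right)))
    # Boundaries at local similarity minima deeper than `depth`.
    return [i for k, (i, s) in enumerate(gaps)
            if 0 < k < len(gaps) - 1
            and s < gaps[k - 1][1] - depth
            and s < gaps[k + 1][1] - depth]
```

Valleys of the similarity curve mark candidate topic boundaries; the paper's contribution replaces the naive bag-of-words vectors above with a co-occurrence-informed similarity that needs no external linguistic resources.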
Abstract:
In global policy documents, the language of Technology-Enhanced Learning (TEL) now firmly structures a perception of educational technology which ‘subsumes’ terms like Networked Learning and e-Learning. Embedded in these three words, though, is a deterministic, economic assumption that technology has now enhanced learning, and will continue to do so. In a market-driven, capitalist society this is a ‘trouble-free’, economically focused discourse which suggests there is no need for further debate about what the use of technology achieves in learning. Yet this raises a problem too: if technology achieves goals for human beings, then in education we are now simply counting on the ‘use of technology’ to enhance learning. This closes the door on a necessary and ongoing critical pedagogical conversation that reminds us it is people who design learning, not technology. Furthermore, such discourse provides a vehicle for those with either strong hierarchical or neoliberal agendas to make simplified political claims in the name of technology. This chapter is a reflection on our use of language in the educational technology community through a corpus-based Critical Discourse Analysis (CDA). In analytical examples that are ‘loaded’ with economic expectation, we can notice how the policy discourse of TEL narrows the conversational space for learning, so that people may struggle to recognise their own subjective being in this language. Through the lens of Lieras’s externality, desubjectivisation and closure (Lieras, 1996) we might examine the possible effects of this discourse and seek a more emancipatory approach. A return to discussing Networked Learning is suggested as a first step towards a more multi-directional conversation than TEL, one that acknowledges the interrelatedness of technology, language and learning in people’s practice. Secondly, a reconsideration of how we write policy for educational technology is recommended, with a critical focus on how people learn, rather than on what technology is assumed to enhance.
Abstract:
We develop a simplified implementation of the Hoshen-Kopelman cluster counting algorithm adapted for honeycomb networks. In our implementation of the algorithm we assume that all nodes in the network are occupied and that links between nodes can be intact or broken. The algorithm counts how many clusters there are in the network and determines which nodes belong to each cluster. The network information is stored in two data sets: the first describes the connectivity of the nodes and the second the state of the links. The algorithm finds all clusters in a single scan across the network, after which cluster relabeling operates on a vector whose size is much smaller than the size of the network. By counting the number of clusters of each size, the algorithm determines the cluster size probability distribution, from which the mean cluster size parameter can be estimated. Although our implementation of the Hoshen-Kopelman algorithm works only for networks with a honeycomb (hexagonal) structure, it can easily be adapted to networks with arbitrary connectivity between the nodes (triangular, square, etc.). The proposed adaptation of the Hoshen-Kopelman cluster counting algorithm is applied to studying the thermal degradation of a graphene-like honeycomb membrane by means of Molecular Dynamics simulation with a Langevin thermostat. ACM Computing Classification System (1998): F.2.2, I.5.3.
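A minimal sketch of the cluster-counting step (our illustration: a generic union-find formulation equivalent in effect to Hoshen-Kopelman label merging, not the authors' honeycomb-specific data layout; the node indexing and the `links` list are assumed representations):

```python
# Nodes are indexed 0..n-1; `links` lists the intact links only --
# broken links are simply absent from the list.
from collections import Counter

def find(parent, i):
    # Follow the label chain to the root label, compressing the path.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def count_clusters(n, links):
    parent = list(range(n))              # every node starts as its own cluster
    for i, j in links:                   # single scan over the intact links
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[rj] = ri              # merge the two cluster labels
    labels = [find(parent, i) for i in range(n)]
    sizes = Counter(labels)              # nodes per cluster label
    size_dist = Counter(sizes.values())  # number of clusters of each size
    # Number-averaged mean cluster size from the size distribution:
    mean_size = sum(s * c for s, c in size_dist.items()) / sum(size_dist.values())
    return labels, size_dist, mean_size

# Example: a 6-node chain (a hexagon ring with one broken link)
labels, dist, mean = count_clusters(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])
print(dist, mean)   # Counter({6: 1}) 6.0 -> one cluster of six nodes
```

As in the abstract, the single scan builds the labels, and subsequent relabeling touches only the label vector rather than the full network.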
Abstract:
In order to reconstruct regional vegetation changes and local conditions during the fen-bog transition in the Borsteler Moor (northwestern Germany), a sediment core covering the period between 7.1 and 4.5 cal kyrs BP was palynologically investigated. The pollen diagram demonstrates the dominance of oak forests and a gradual replacement of trees by raised bog vegetation under the wetter conditions of the Late Atlantic. At ~6 cal kyrs BP, the non-pollen palynomorphs (NPP) demonstrate the succession from mesotrophic conditions, clearly indicated by a number of fungal spore types, to oligotrophic conditions, indicated by Sphagnum spores, Bryophytomyces sphagni, and the testate amoebae Amphitrema, Assulina and Arcella, etc. Four relatively dry phases during the transition from fen to bog are clearly indicated by the dominance of Calluna and associated fungi as well as by the increase of microcharcoal. Several new NPP types are described and known NPP types are identified. All NPP are discussed in the context of their palaeoecological indicator values.
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve the material decomposition. The first approach is to model distortions using an artificial neural network (ANN) and compensate for the distortion in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be implemented as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn the distortion model or the correction model of the spectral distortion. This replaces the need for the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
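A minimal sketch of the projection-domain correction idea (our illustration, not the authors' network or data: the bin count, architecture, stand-in calibration data, and the choice of scikit-learn's MLPRegressor are all assumptions): a regressor is calibrated on phantom projections with known ideal counts, then applied bin-wise to measured projections before reconstruction.

```python
# ANN-based spectral-distortion correction in the projection domain
# (illustrative only; shapes and the distortion model are stand-ins).
import numpy as np
from sklearn.neural_network import MLPRegressor

N_BINS = 8  # hypothetical number of swept energy thresholds/bins

# Calibration data from known 3D-printed phantoms:
#   X_cal: measured (distorted, noisy) counts, shape (n_rays, N_BINS)
#   Y_cal: ideal counts predicted from the known phantom composition
rng = np.random.default_rng(0)
Y_cal = rng.uniform(1e2, 1e4, size=(5000, N_BINS))       # stand-in ideal counts
X_cal = Y_cal * rng.uniform(0.7, 1.0, size=Y_cal.shape)  # stand-in distortion

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
ann.fit(np.log(X_cal), np.log(Y_cal))  # log domain keeps counts positive

def correct_projections(measured):
    # Apply the learned correction to measured spectral projections
    # before handing them to the CT reconstruction pipeline.
    return np.exp(ann.predict(np.log(measured)))
```

The same calibration data could instead parameterize a forward distortion model used inside a statistical reconstruction, which is the first of the two approaches the abstract describes.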
Abstract:
The current crime decrease is defying traditional criminological theories such as those espoused by Bonger (1916), who researched the relationship between crime and economic conditions and stated that when unemployment rises, so does crime. Both the USA and the UK have been in a deep recession since 2008, yet crime has dropped dramatically in both countries while unemployment has risen. Over the past 20 years crime has halved in England and Wales. So how do we explain this phenomenon? Crime is down across the West, but more so in Britain (see Figure 1). In England and Wales crime decreased by 8% in a single year (2013). Vandalism is down by 14%, and burglaries and vehicle crime by 11%. The murder rate in the UK is at its lowest since 1978; in 2013, 540 people were killed. Some less serious offences are vanishing too: antisocial behaviour has fallen from just under 4 million incidents in 2007-08 to 2.4 million (The Economist, 20/4/13). According to the most recent annual results from the Crime Survey for England and Wales (CSEW), crime is at its lowest level since the survey began in 1981. The latest figures from the CSEW show there were an estimated 7.3 million incidents of crime against households and resident adults (aged 16 and over) in England and Wales for the year ending March 2014. This represents a 14% decrease compared with the previous year's survey, and is the lowest estimate since the survey began in 1981.