995 results for Discrete events
Abstract:
The terrorist attacks in the United States on September 11, 2001 appeared to be a harbinger of increased terrorism and violence in the 21st century, bringing terrorism and political violence to the forefront of public discussion. Questions about these events abound, and "Estimating the Historical and Future Probabilities of Large Scale Terrorist Event" [Clauset and Woodard (2013)] asks specifically, "how rare are large scale terrorist events?" and, more generally, encourages discussion of the role of quantitative methods in terrorism research, policy and decision-making. Answering the primary question raises two challenges. The first is identifying terrorist events. The second is finding a simple yet robust model for rare events with good explanatory and predictive capabilities. The challenge of identifying terrorist events is acknowledged and addressed by reviewing and using data from two well-known and reputable sources: the Memorial Institute for the Prevention of Terrorism-RAND database (MIPT-RAND) [Memorial Institute for the Prevention of Terrorism] and the Global Terrorism Database (GTD) [National Consortium for the Study of Terrorism and Responses to Terrorism (START) (2012), LaFree and Dugan (2007)]. Clauset and Woodard (2013) provide a detailed discussion of the limitations of the data and models used, in the context of the larger issues surrounding terrorism and policy.
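The question "how rare are large scale terrorist events?" is typically approached by fitting a heavy-tailed model to event severities. A minimal sketch of the standard continuous power-law tail estimator, on synthetic data (this illustrates the general technique, not the authors' exact procedure; the threshold `xmin` and the data are assumptions):

```python
import math
import random

def powerlaw_tail_mle(severities, xmin):
    """Continuous power-law MLE for the tail exponent:
    alpha = 1 + n / sum(ln(x_i / xmin)) over events with x_i >= xmin."""
    tail = [x for x in severities if x >= xmin]
    n = len(tail)
    alpha = 1.0 + n / sum(math.log(x / xmin) for x in tail)
    return alpha, n

def prob_exceeding(severities, xmin, x):
    """P(X >= x | X >= xmin) under the fitted power law: (x/xmin)^(1-alpha)."""
    alpha, _ = powerlaw_tail_mle(severities, xmin)
    return (x / xmin) ** (1.0 - alpha)

# Synthetic severities drawn from a power law with alpha = 2.5 via
# inverse-transform sampling (illustrative only, not real event data).
random.seed(1)
xmin = 10.0
data = [xmin * random.random() ** (-1.0 / 1.5) for _ in range(5000)]

alpha_hat, n_tail = powerlaw_tail_mle(data, xmin)
print(f"alpha_hat = {alpha_hat:.2f} over {n_tail} tail events")
print(f"P(X >= 100 | X >= 10) ~ {prob_exceeding(data, xmin, 100):.4f}")
```

The exceedance probability is the quantity of policy interest: it converts the fitted tail exponent into an estimated probability of an event at least as severe as a given threshold.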
Abstract:
To this day, realizations in the standard model of (lossy) trapdoor functions from discrete-log-type assumptions require large public key sizes, e.g., about Θ(λ²) group elements for a reduction from the decisional Diffie-Hellman assumption (where λ is a security parameter). We propose two realizations of lossy trapdoor functions that achieve public key size of only Θ(λ) group elements in bilinear groups, with a reduction from the decisional Bilinear Diffie-Hellman assumption. Our first construction achieves this result at the expense of a long common reference string of Θ(λ²) elements, albeit reusable in multiple LTDF instantiations. Our second scheme also achieves public keys of size Θ(λ), entirely in the standard model and in particular without any reference string, at the cost of a slightly more involved construction. The main technical novelty, developed for the second scheme, is a compact encoding technique for generating compressed representations of certain sequences of group elements for the public parameters.
Abstract:
Twitter is the focus of much research attention, both in traditional academic circles and in commercial market and media research, as analytics give increasing insight into the performance of the platform in areas as diverse as political communication, crisis management, television audiencing and other industries. While methods for tracking Twitter keywords and hashtags have developed apace and are well documented, the make-up of the Twitter user base and its evolution over time have been less understood to date. Recent research efforts have taken advantage of functionality provided by Twitter's Application Programming Interface to develop methodologies to extract information that allows us to understand the growth of Twitter, its geographic spread and the processes by which particular Twitter users have attracted followers. From politicians to sporting teams, and from YouTube personalities to reality television stars, this technique enables us to gain an understanding of what prompts users to follow others on Twitter. This article outlines how we came upon this approach, describes the method we adopted to produce accession graphs and discusses their use in Twitter research. It also addresses the wider ethical implications of social network analytics, particularly in the context of a detailed study of the Twitter user base.
Abstract:
Existing techniques for automated discovery of process models from event logs largely focus on extracting flat process models. In other words, they fail to exploit the notion of subprocess, as well as structured error handling and repetition constructs provided by contemporary process modeling notations, such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of BPMN models containing subprocesses, interrupting and non-interrupting boundary events, and loop and multi-instance markers. The technique analyzes dependencies between data attributes associated with events, in order to identify subprocesses and to extract their associated logs. Parent process and subprocess models are then discovered separately using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. A validation with one synthetic and two real-life logs shows that process models derived using the proposed technique are more accurate and less complex than those derived with flat process model discovery techniques.
Abstract:
The life history strategies of massive Porites corals make them a valuable resource not only as key providers of reef structure, but also as recorders of past environmental change. Yet recent documented evidence of an unprecedented increase in the frequency of mortality in Porites warrants investigation into the history of mortality and associated drivers. To achieve this, both an accurate chronology and an understanding of the life history strategies of Porites are necessary. Sixty-two individual Uranium–Thorium (U–Th) dates from 50 dead massive Porites colonies from the central inshore region of the Great Barrier Reef (GBR) revealed the timing of mortality to have occurred predominantly over two main periods from 1989.2 ± 4.1 to 2001.4 ± 4.1, and from 2006.4 ± 1.8 to 2008.4 ± 2.2 A.D., with a small number of colonies dating earlier. Overall, the peak ages of mortality are significantly correlated with maximum sea-surface temperature anomalies. Despite potential sampling bias, the frequency of mortality increased dramatically post-1980. These observations are similar to the results reported for the Southern South China Sea. High resolution measurements of Sr/Ca and Mg/Ca obtained from a well preserved sample that died in 1994.6 ± 2.3 revealed that the time of death occurred at the peak of sea surface temperatures (SST) during the austral summer. In contrast, Sr/Ca and Mg/Ca analysis in two colonies dated to 2006.9 ± 3.0 and 2008.3 ± 2.0, suggest that both died after the austral winter. An increase in Sr/Ca ratios and the presence of low Mg-calcite cements (as determined by SEM and elemental ratio analysis) in one of the colonies was attributed to stressful conditions that may have persisted for some time prior to mortality. 
For both colonies, however, the timing of mortality coincides with the 4th and 6th largest flood events reported for the Burdekin River in the past 60 years, implying that factors associated with terrestrial runoff may have been responsible for mortality. Our results show that a combination of U–Th and elemental ratio geochemistry can potentially be used to precisely and accurately determine the timing and season of mortality in modern massive Porites corals. For reefs where long-term monitoring data are absent, the ability to reconstruct historical events in coral communities may prove useful to reef managers by providing some baseline knowledge on disturbance history and associated drivers.
Abstract:
This paper uses a correlated multinomial logit model and a Poisson regression model to measure the factors affecting demand for different types of transportation by elderly and disabled people in rural Virginia. The major results are: (a) A paratransit system providing door-to-door service is highly valued by transportation-handicapped people; (b) Taxis are probably a potential but inferior alternative even when subsidized; (c) Buses are a poor alternative, especially in rural areas where distances to bus stops may be long; (d) Making buses handicap-accessible would have a statistically significant but small effect on mode choice; (e) Demand is price inelastic; and (f) The total number of trips taken is insensitive to mode availability and characteristics. These results suggest that transportation-handicapped people take a limited number of trips. Those they do take are in some sense necessary (given the low elasticity with respect to mode price or availability). People will substitute away from relying upon others when appropriate transportation is available, at least to some degree. But such transportation needs to be flexible enough to meet the needs of the people involved.
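The Poisson regression of trip counts on mode attributes can be sketched with a plain Newton-Raphson fit of the log-linear model; the covariates (`fare`, `dist`) and the simulated data below are hypothetical stand-ins for the study's variables:

```python
import numpy as np

def poisson_regression(X, y, n_iter=25):
    """Fit log E[y] = X @ beta by Newton-Raphson on the
    Poisson log-likelihood (equivalently, IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # fitted means
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Simulated trip counts with hypothetical fare and distance-to-stop
# covariates; small coefficients mimic price-inelastic demand.
rng = np.random.default_rng(0)
n = 2000
fare = rng.uniform(0, 2, n)
dist = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), fare, dist])
true_beta = np.array([1.0, -0.3, -0.5])
y = rng.poisson(np.exp(X @ true_beta))

beta_hat = poisson_regression(X, y)
print("beta_hat:", np.round(beta_hat, 2))
```

With count outcomes like "total number of trips taken", the exponentiated coefficients read directly as rate ratios, which is why the model suits the elasticity questions in the abstract.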
Abstract:
This paper considers two problems that frequently arise in dynamic discrete choice problems but have not received much attention with regard to simulation methods. The first problem is how to construct unbiased simulators of probabilities conditional on past history. The second is simulating a discrete transition probability model when the underlying dependent variable is really continuous. Both methods work well relative to reasonable alternatives in the application discussed. However, in both cases, for this application, simpler methods also provide reasonably good results.
Abstract:
This paper demonstrates the use of a spreadsheet in exploring non-linear difference equations that describe digital control systems used in radio engineering, communication and computer architecture. These systems, being the focus of intensive studies of mathematicians and engineers over the last 40 years, may exhibit extremely complicated behaviour interpreted in contemporary terms as transition from global asymptotic stability to chaos through period-doubling bifurcations. The authors argue that embedding advanced mathematical ideas in the technological tool enables one to introduce fundamentals of discrete control systems in tertiary curricula without learners having to deal with the complex machinery that rigorous mathematical methods of investigation require. In particular, in the appropriately designed spreadsheet environment, one can effectively visualize a qualitative difference in the behaviour of systems with different types of non-linear characteristic.
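The abstract does not name the specific difference equations, but the period-doubling route to chaos it describes can be illustrated with the canonical logistic map x_{n+1} = r·x_n·(1 − x_n), iterated exactly as a spreadsheet column would be:

```python
def iterate_map(r, x0=0.2, n_transient=500, n_keep=64):
    """Iterate the logistic map x -> r*x*(1-x), discard transients,
    and return the distinct long-run values (the attractor)."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# Period doubling en route to chaos as the parameter r grows:
print(len(iterate_map(2.8)))   # stable fixed point -> 1 value
print(len(iterate_map(3.2)))   # period-2 cycle -> 2 values
print(len(iterate_map(3.5)))   # period-4 cycle -> 4 values
print(len(iterate_map(3.9)))   # chaotic -> many distinct values
```

Plotting the returned attractor values against r reproduces the familiar bifurcation diagram, which is exactly the qualitative difference in behaviour the spreadsheet environment is designed to visualize.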
Abstract:
Low-temperature plasmas in direct contact with arbitrary, written linear features on a Si wafer enable catalyst-free integration of carbon nanotubes into a Si-based nanodevice platform and in situ resolution of individual nucleation events. The graded nanotube arrays show reliable, reproducible, and competitive performance in electron field emission and biosensing nanodevices.
Abstract:
Cluster ions and charged and neutral nanoparticle concentrations were monitored using a neutral cluster and air ion spectrometer (NAIS) over a period of one year in Brisbane, Australia. The study yielded 242 complete days of usable data, of which particle formation events were observed on 101 days. Small, intermediate and large ion concentrations were evaluated in real time. In the diurnal cycle, the small ion concentration was highest during the second half of the night, while large ion concentrations reached a maximum during the day. The small ion concentration decreased when the large ion concentration increased. Particle formation was generally followed by a peak in the intermediate ion concentration. The rate of increase of intermediate ions was used as the criterion for identifying particle formation events. Such events were followed by a period of growth to larger sizes and usually occurred between 8 am and 2 pm. Particle formation events were found to be related to the wind direction. The gaseous precursors for the production of secondary particles in the urban environment of Brisbane have been shown to be ammonia and sulfuric acid. During these events, the nanoparticle number concentrations in the size range 1.6 to 42 nm, which were normally lower than 1×10⁴ cm⁻³, often exceeded 5×10⁴ cm⁻³ with occasional values over 1×10⁵ cm⁻³. Cluster ions generally occurred in number concentrations between 300 and 600 cm⁻³ but decreased significantly to about 200 cm⁻³ during particle formation events. This was accompanied by an increase in the large ion concentration. We calculated the fraction of nanoparticles that were charged and investigated the occurrence of possible overcharging during particle formation events. Overcharging is defined as the condition where the charged fraction of particles is higher than in charge equilibrium. 
This can occur when cluster ions attach to neutral particles in the atmosphere, giving rise to larger concentrations of charged particles in the short term. Ion-induced nucleation is one of the mechanisms of particle formation in the atmosphere, and overcharging has previously been considered as an indicator of this process. The possible role of ions in particle formation was investigated.
Abstract:
We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat during the mid-Pleistocene. High-resolution geophysical data provide evidence for sector collapse, producing an approximately 1 km³ submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites. However, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failure. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity. This activity consisted of episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses and followed by eruptions of poorly vesiculated basaltic scoria, and basaltic lava flows.
Abstract:
Background: Cancer metastasis is the main contributor to breast cancer fatalities as women with the metastatic disease have poorer survival outcomes than women with localised breast cancers. There is an urgent need to develop appropriate prognostic methods to stratify patients based on the propensities of their cancers to metastasise. The insulin-like growth factor (IGF)-I:IGF binding protein (IGFBP):vitronectin complexes have been shown to stimulate changes in gene expression favouring increased breast cancer cell survival and a migratory phenotype. We therefore investigated the prognostic potential of these IGF- and extracellular matrix (ECM) interaction-induced proteins in the early identification of breast cancers with a propensity to metastasise using patient-derived tissue microarrays. Methods: Semiquantitative immunohistochemistry analyses were performed to compare the extracellular and subcellular distribution of IGF- and ECM-induced signalling proteins among matched normal, primary cancer and metastatic cancer formalin-fixed paraffin-embedded breast tissue samples. Results: The IGF- and ECM-induced signalling proteins were differentially expressed between subcellular and extracellular localisations. Vitronectin and IGFBP-5 immunoreactivity was lower while β1 integrin immunoreactivity was higher in the stroma surrounding metastatic cancer tissues, as compared to normal breast and primary cancer stromal tissues. Similarly, immunoreactive stratifin was found to be increased in the stroma of primary as well as metastatic breast tissues. Immunoreactive fibronectin and β1 integrin were found to be highly expressed at the leading edge of tumours. Based on the immunoreactivity it was apparent that the cell signalling proteins AKT1 and ERK1/2 shuttled from the nucleus to the cytoplasm with tumour progression. 
Conclusion: This is the first in-depth, compartmentalised analysis of the distribution of IGF- and ECM-induced signalling proteins in metastatic breast cancers. This study has provided insights into the changing pattern of cellular localisation and expression of IGF- and ECM-induced signalling proteins in different stages of breast cancer. The differential distribution of these biomarkers could provide important prognostic and predictive indicators that may assist the clinical management of breast disease, namely in the early identification of cancers with a propensity to metastasise, and/or recur following adjuvant therapy.
Abstract:
Objective To evaluate methods for monitoring monthly aggregated hospital adverse event data that display clustering, non-linear trends and possible autocorrelation. Design Retrospective audit. Setting The Northern Hospital, Melbourne, Australia. Participants 171,059 patients admitted between January 2001 and December 2006. Measurements The analysis is illustrated with 72 months of patient fall injury data using a modified Shewhart U control chart, and charts derived from a quasi-Poisson generalised linear model (GLM) and a generalised additive mixed model (GAMM) that included an approximate upper control limit. Results The data were overdispersed and displayed a downward trend and possible autocorrelation. The downward trend was followed by a predictable period after December 2003. The GLM-estimated incidence rate ratio was 0.98 (95% CI 0.98 to 0.99) per month. The GAMM-fitted count fell from 12.67 (95% CI 10.05 to 15.97) in January 2001 to 5.23 (95% CI 3.82 to 7.15) in December 2006 (p<0.001). The corresponding values for the GLM were 11.9 and 3.94. Residual plots suggested that the GLM underestimated the rate at the beginning and end of the series and overestimated it in the middle. The data suggested a more rapid rate fall before 2004 and a steady state thereafter, a pattern reflected in the GAMM chart. The approximate upper two-sigma equivalent control limit in the GLM and GAMM charts identified 2 months that showed possible special-cause variation. Conclusion Charts based on GAMM analysis are a suitable alternative to Shewhart U control charts with these data.
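The Shewhart U control chart used as the baseline above monitors an event rate against exposure-adjusted limits. A minimal sketch with the textbook three-sigma limits u̅ ± 3·√(u̅/nᵢ), on hypothetical monthly counts rather than the study's data:

```python
import math

def u_chart(counts, exposures):
    """Shewhart U-chart: per-exposure event rate with 3-sigma limits.

    Center line u_bar = total events / total exposure;
    limits for month i: u_bar +/- 3 * sqrt(u_bar / n_i).
    """
    u_bar = sum(counts) / sum(exposures)
    out = []
    for c, n in zip(counts, exposures):
        ucl = u_bar + 3.0 * math.sqrt(u_bar / n)
        lcl = max(0.0, u_bar - 3.0 * math.sqrt(u_bar / n))
        out.append({"rate": c / n, "ucl": ucl, "lcl": lcl,
                    "signal": not (lcl <= c / n <= ucl)})
    return out

# Hypothetical monthly fall-injury counts; exposures are thousands of
# occupied bed-days. Month 6 is a deliberate spike.
counts    = [12, 9, 11, 10, 8, 25, 9, 10]
exposures = [2.4, 2.3, 2.5, 2.4, 2.4, 2.5, 2.4, 2.5]

chart = u_chart(counts, exposures)
print([m["signal"] for m in chart])  # only month 6 signals
```

The chart assumes a stable Poisson rate, which is exactly what fails with the overdispersed, trending data in the study; the GLM- and GAMM-based charts replace the fixed center line with a fitted trend before computing the control limits.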
Abstract:
Age-related macular degeneration (AMD) affects the central vision and subsequently may lead to visual loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and successive treatment may improve the visual acuity. AMD is mainly classified into dry and wet type; however, dry AMD is more common in the aging population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists. It is laborious, time-consuming, and resource-intensive. Hence, in this study, we have proposed an automated AMD detection system using discrete wavelet transform (DWT) and feature ranking strategies. The first four-order statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from DWT coefficients. We have used five (t test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance, receiver operating characteristics curve-based, and Wilcoxon) feature ranking strategies to identify the optimal feature set. A set of supervised classifiers namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naive Bayes, and probabilistic neural network were used to evaluate the highest performance measure using the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70%, sensitivity of 91.11%, and specificity of 96.30% using KLD ranking and the SVM classifier. We have also formulated an AMD Risk Index using selected features to classify the normal and dry AMD classes with a single number. The proposed system can be used to assist the clinicians and also for mass AMD screening programs.
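The feature-extraction step can be sketched with a hand-rolled one-level 2-D Haar DWT; the study's wavelet choice and retinal image data differ, and the 8×8 patch below is a synthetic stand-in, but the sub-band features mirror the moments, energy and entropy listed in the abstract:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns approximation (LL) and
    detail (LH, HL, HH) sub-bands of an even-sized image."""
    # Rows: average and difference of adjacent pixel pairs.
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # Columns: the same filters applied to the row outputs.
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_features(band):
    """First four moments, energy and Shannon entropy of a sub-band."""
    x = band.ravel()
    mean, var = x.mean(), x.var()
    std = np.sqrt(var) if var > 0 else 1.0
    skew = np.mean(((x - mean) / std) ** 3)
    kurt = np.mean(((x - mean) / std) ** 4)
    energy = np.sum(x ** 2)
    p = np.abs(x) / (np.sum(np.abs(x)) + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return [mean, var, skew, kurt, energy, entropy]

# Synthetic 8x8 patch standing in for a fundus image region.
rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (8, 8))
ll, lh, hl, hh = haar_dwt2(img)
features = [f for band in (ll, lh, hl, hh) for f in subband_features(band)]
print(len(features))  # 6 features x 4 sub-bands = 24
```

The resulting feature vector is what the ranking strategies (t test, KLD, and the rest) would score and prune before classification; the Gini index feature from the abstract is omitted here for brevity.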