948 results for Set of dimensions of fractality


Relevance:

100.00%

Publisher:

Abstract:

Stigmergy is a biological term used when discussing a subset of insect swarm behaviour, describing the apparent organisation seen during their activities. Stigmergy describes a communication mechanism based on environment-mediated signals which trigger responses among the insects. This phenomenon is demonstrated in the behaviour of ants and their food-gathering process when following pheromone trails, where the pheromones are a form of environment-mediated communication. What is interesting about this phenomenon is that highly organised societies are achieved without an apparent management structure. Stigmergy is also observed in human environments, both natural and engineered. It is implicit in the Web, where sites provide a virtual environment supporting coordinative contributions. Researchers in varying disciplines appreciate the power of this phenomenon and have studied how to exploit it. As stigmergy becomes more widely researched, we see its definition mutate as papers citing original work become referenced themselves. Each paper interprets these works in ways very specific to the research being conducted. Our own research aims to better understand what improves the collaborative function of a Web site when exploiting the phenomenon. However, when researching stigmergy to develop our understanding, we discovered the lack of a standardised and abstract model for the phenomenon. Papers frequently cite the same generic descriptions before becoming intimately focused on formal specifications of an algorithm, or on esoteric discussions regarding sub-facets of the topic. None provides a holistic and macro-level view to model and standardise the nomenclature. This paper provides a content analysis of influential literature, documenting the numerous theoretical and experimental papers that have focused on stigmergy. We establish that stigmergy is a phenomenon that transcends the insect world and is more than just a metaphor when applied to the human world. We present, from our own research, our general theory and abstract model of the semantics of stigma in stigmergy. We hope our model will clarify the nuances of the phenomenon into a useful road-map, and standardise vocabulary that we witness becoming confused and divergent. Furthermore, this paper documents the analysis on which we base our next paper: Special Theory of Stigmergy: A Design Pattern for Web 2.0 Collaboration.
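
Because the abstract turns on environment-mediated coordination with no central controller, a toy simulation may make the mechanism concrete. The sketch below is not the paper's model or nomenclature; the two-path setup, deposit and evaporation rates, and all names are invented purely for illustration.

```python
# Toy illustration of stigmergy: agents choose between two paths by sensing and
# reinforcing an environment-mediated signal (pheromone), with no central
# coordinator. All parameters are illustrative, not taken from the paper.
import random

def run_colony(steps=2000, evaporation=0.01, deposit=1.0, seed=1):
    random.seed(seed)
    pheromone = {"short": 1.0, "long": 1.0}   # environment state (the stigma)
    length = {"short": 1, "long": 2}          # the short path completes trips faster
    trips = {"short": 0, "long": 0}
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        trips[path] += 1
        pheromone[path] += deposit / length[path]   # shorter trips reinforce faster
        for p in pheromone:                         # signals decay unless reinforced
            pheromone[p] *= (1.0 - evaporation)
    return trips

print(run_colony())  # the shorter path comes to dominate via environmental feedback alone
```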

Relevance:

100.00%

Publisher:

Abstract:

MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalisation of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time, by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement for a benchmark problem generated by our heuristic algorithm with a conventional mapper/reducer placement which puts a fixed number of mappers/reducers on each machine. The comparison results show that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement, while still satisfying the computation deadline.
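
The abstract does not spell out its heuristic, so the sketch below shows only the bin-packing framing it refers to: a generic first-fit-decreasing placement of tasks onto machines. The task demands, machine capacity, and function names are hypothetical, not taken from the paper.

```python
# Generic first-fit-decreasing placement of map/reduce tasks onto machines,
# treating placement as bin packing by resource demand. Baseline sketch only;
# the paper's own heuristic is not reproduced, and the numbers are hypothetical.

def place_tasks(task_demands, machine_capacity):
    """Return a list of machines, each a list of (task_id, demand) assignments."""
    machines = []  # each entry: [remaining_capacity, [(task, demand), ...]]
    for task, demand in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        for m in machines:
            if m[0] >= demand:          # first machine with enough spare capacity
                m[0] -= demand
                m[1].append((task, demand))
                break
        else:                           # no machine fits: open a new one
            machines.append([machine_capacity - demand, [(task, demand)]])
    return [m[1] for m in machines]

demands = {"map1": 3, "map2": 5, "map3": 2, "reduce1": 6, "reduce2": 4}
for i, m in enumerate(place_tasks(demands, machine_capacity=8)):
    print(f"machine {i}: {m}")
```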

Relevance:

100.00%

Publisher:

Abstract:

Aims: This paper reports on the effectiveness of a self-management programme, based on the self-efficacy construct, in older people with heart failure. Background: Heart failure is a major health problem worldwide, with high mortality and morbidity, making it a leading cause of hospitalization. Heart failure is associated with a complex set of symptoms that arise from problems in fluid and sodium retention. Hence, managing salt and fluid intake is important and can be enhanced by improving patients' self-efficacy in changing their behaviour. Design: Randomized controlled trial. Methods: Heart failure patients attending cardiac clinics in northern Taiwan from October 2006 to May 2007 were randomly assigned to two groups: control (n = 46) and intervention (n = 47). The intervention group received a 12-week self-management programme that emphasized self-monitoring of salt/fluid intake and heart failure-related symptoms. Data were collected at baseline as well as 4 and 12 weeks later. Data analysis to test the hypotheses used repeated-measures ANOVA models. Results: Participants who received the intervention programme had significantly better self-efficacy for salt and fluid control and self-management behaviour, and significantly fewer heart failure-related symptoms, than participants in the control group. However, the two groups did not differ significantly in health service use. Conclusion: The self-management programme improved self-efficacy for salt and fluid control and self-management behaviours, and decreased heart failure-related symptoms, in older Taiwanese outpatients with heart failure. Nursing interventions to improve health-related outcomes for patients with heart failure should emphasize self-efficacy in the self-management of their disease.

Relevance:

100.00%

Publisher:

Abstract:

Dendritic cells (DCs) play critical roles in immune-mediated kidney diseases. Little is known, however, about DC subsets in human chronic kidney disease, with previous studies restricted to a limited set of pathologies and to immunohistochemical methods. In this study, we developed novel protocols for extracting renal DC subsets from diseased human kidneys and identified, enumerated, and phenotyped them by multicolor flow cytometry. We detected significantly greater numbers of total DCs, as well as of the CD141(hi) and CD1c(+) myeloid DC (mDC) subsets, in diseased biopsies with interstitial fibrosis than in diseased biopsies without fibrosis or in healthy kidney tissue. In contrast, plasmacytoid DC numbers were significantly higher in the fibrotic group compared with healthy tissue only. Numbers of all DC subsets correlated with loss of kidney function, recorded as estimated glomerular filtration rate. CD141(hi) DCs expressed C-type lectin domain family 9 member A (CLEC9A), whereas the majority of CD1c(+) DCs lacked expression of CD1a and DC-specific ICAM-3-grabbing nonintegrin (DC-SIGN), suggesting these mDC subsets may be circulating CD141(hi) and CD1c(+) blood DCs infiltrating kidney tissue. Our analysis revealed that CLEC9A(+) and CD1c(+) cells were restricted to the tubulointerstitium. Notably, DC expression of the costimulatory and maturation molecule CD86 was significantly increased in both diseased cohorts compared with healthy tissue. Transforming growth factor-β levels in dissociated tissue supernatants were significantly elevated in diseased biopsies with fibrosis compared with nonfibrotic biopsies, with mDCs identified as a major source of this profibrotic cytokine. Collectively, our data indicate that activated mDC subsets, likely recruited into the tubulointerstitium, are positioned to play a role in the development of fibrosis and, thus, progression to chronic kidney disease.

Relevance:

100.00%

Publisher:

Abstract:

Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been the use of priors in the form of Gaussian stochastic processes (GASP) that are conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms, expressed as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by its application to a simple hydrological model.
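
As a concrete anchor for the state-space ingredient of this construction, the sketch below runs a Kalman filter and Rauch-Tung-Striebel smoother on a scalar linear-Gaussian model. The paper's emulator additionally places Gaussian process priors on the innovation terms as functions of parameters and inputs, which is omitted here; all coefficients and the synthetic data are illustrative.

```python
# Minimal Kalman filter + RTS smoother for the scalar model
#   x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t,
# i.e. only the linear state-space layer of the emulator described above.
import numpy as np

def kalman_smooth(y, a=0.9, q=0.1, r=0.5, x0=0.0, p0=1.0):
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)      # filtered means / variances
    xp = np.zeros(n); pp = np.zeros(n)      # one-step predictions
    x, p = x0, p0
    for t in range(n):
        x, p = a * x, a * a * p + q         # predict
        xp[t], pp[t] = x, p
        k = p / (p + r)                     # Kalman gain (observation matrix = 1)
        x, p = x + k * (y[t] - x), (1 - k) * p
        xf[t], pf[t] = x, p
    xs = xf.copy(); ps = pf.copy()          # backward (RTS) smoothing pass
    for t in range(n - 2, -1, -1):
        g = a * pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
    return xs, ps

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.3, 50))           # synthetic "model output"
obs = truth + rng.normal(0, 0.7, 50)                # noisy design data
mean, var = kalman_smooth(obs)
print(np.round(mean[:5], 2))
```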

Relevance:

100.00%

Publisher:

Abstract:

Radical-directed dissociation of gas phase ions is emerging as a powerful and complementary alternative to traditional tandem mass spectrometric techniques for biomolecular structural analysis. Previous studies have identified that coupling of 2-[(2,2,6,6-tetramethylpiperidin-1-oxyl)methyl]benzoic acid (TEMPO-Bz) to the N-terminus of a peptide introduces a labile oxygen-carbon bond that can be selectively cleaved upon collisional activation to produce a radical ion. Here we demonstrate that structurally defined peptide radical ions can also be generated upon UV laser photodissociation of the same TEMPO-Bz derivatives in a linear ion-trap mass spectrometer. When subjected to further mass spectrometric analyses, the radical ions formed by a single laser pulse undergo dissociations identical to those of ions formed by collisional activation of the same precursor ion, and can thus be used to derive molecular structure. Mapping the initial radical formation process as a function of photon energy by photodissociation action spectroscopy reveals that photoproduct formation is selective but occurs only in modest yield across the wavelength range (300-220 nm), with the photoproduct yield maximised between 235 and 225 nm. Based on the analysis of a set of model compounds, structural modifications to the TEMPO-Bz derivative are suggested to optimise radical photoproduct yield. Future development of such probes offers the advantage of increased sensitivity and selectivity for radical-directed dissociation. © 2014 the Owner Societies.
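
For readers who think in energies rather than wavelengths, the standard photon-energy conversion (a back-of-envelope relation, not taken from the paper) places the reported window as follows:

```latex
E_{\mathrm{photon}} = \frac{hc}{\lambda} \approx \frac{1239.84\ \mathrm{eV\,nm}}{\lambda},
\qquad E(300\ \mathrm{nm}) \approx 4.1\ \mathrm{eV},
\qquad E(225\ \mathrm{nm}) \approx 5.5\ \mathrm{eV}.
```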

Relevance:

100.00%

Publisher:

Abstract:

This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics, and to investigate their usefulness as predictors of quality assurance (QA) success and failure. A total of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 x 2.2 cm square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being only weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests of plan accuracy, which may help minimise the time spent on QA assessments of treatments that are unlikely to pass.
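
To make the thresholding step concrete, the sketch below computes a fluence-weighted mean aperture area per beam and flags beams under the 5 cm² threshold quoted above. The segment areas, weights, and the exact definition of the metric are assumptions for illustration; the TADA implementation itself is not reproduced here.

```python
# Illustrative mean-field-area check: average a beam's aperture area over its
# segments and flag beams under 5 cm^2 (about a 2.2 x 2.2 cm square field).
# Segment data are hypothetical, not taken from the study.

FIELD_AREA_THRESHOLD_CM2 = 5.0

def mean_field_area(segment_areas_cm2, segment_weights):
    """Fluence-weighted mean aperture area over a beam's segments."""
    total_weight = sum(segment_weights)
    return sum(a * w for a, w in zip(segment_areas_cm2, segment_weights)) / total_weight

beams = {
    "beam1": ([12.0, 9.5, 7.2], [0.5, 0.3, 0.2]),
    "beam2": ([4.1, 3.8, 6.0], [0.4, 0.4, 0.2]),   # dominated by small apertures
}

for name, (areas, weights) in beams.items():
    mfa = mean_field_area(areas, weights)
    verdict = "below threshold, likely to fail QA" if mfa < FIELD_AREA_THRESHOLD_CM2 else "above threshold"
    print(f"{name}: mean field area {mfa:.1f} cm^2 -> {verdict}")
```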

Relevance:

100.00%

Publisher:

Abstract:

Introduction: When it comes to sustainable economic development, it is hard to go past the thought of investment in information technology (IT). The foundation of sustainable economic development is sustainable infrastructure. This means that investment in IT is about developing sustainable IT infrastructure. An IT infrastructure is a set of IT tools on which organisations can develop applications to manage their various business processes. At the national economic level, this is all about developing a national IT infrastructure to provide social and economic services to the various stakeholders. Current troubled economic times call for collaboration and centrality in IT infrastructure development. This notion has led to the idea of national broadband networks, sustainable telecommunication platforms, and national IT development plans and goals. However, these thoughts and actions do not directly impact the critical social and economic processes of organisations. That is, these thoughts set the tone and direction of actions

Relevance:

100.00%

Publisher:

Abstract:

Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma across a cohort of 163 patients treated at 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch's t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provided similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT.
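
The sketch below illustrates the kind of metric-plus-statistics comparison described above: a target homogeneity index (here the (D2% − D98%)/D50% form, one of several definitions in use and not necessarily the one used in the study) and a two-tailed Welch's t-test between two synthetic groups of plan scores.

```python
# Example dose metric and significance test; all dose arrays and group values
# are synthetic, and the HI definition is an assumption for illustration.
import numpy as np
from scipy import stats

def homogeneity_index(target_doses):
    # D2% is the dose to the hottest 2% of voxels, i.e. the 98th percentile dose
    d2, d50, d98 = np.percentile(target_doses, [98, 50, 2])
    return (d2 - d98) / d50

rng = np.random.default_rng(42)
ptv_dose = rng.normal(78.0, 1.2, 10000)        # synthetic PTV voxel doses in Gy
print(f"HI = {homogeneity_index(ptv_dose):.3f}")

imrt_hi = rng.normal(0.080, 0.02, 33)          # synthetic per-plan HI values
vmat_hi = rng.normal(0.085, 0.02, 47)
t, p = stats.ttest_ind(imrt_hi, vmat_hi, equal_var=False)  # Welch's t-test
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```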

Relevance:

100.00%

Publisher:

Abstract:

Objective: To develop a system for the automatic classification of pathology reports for Cancer Registry notifications. Method: A two-pass approach is proposed to classify whether pathology reports are cancer notifiable or not. The first pass queries pathology HL7 messages for known report types that are received by the Queensland Cancer Registry (QCR), while the second pass aims to analyse the free-text reports and identify those that are cancer notifiable. Cancer Registry business rules, natural language processing and symbolic reasoning using the SNOMED CT ontology were adopted in the system. Results: The system was developed on a corpus of 500 histology and cytology reports (with 47% notifiable reports) and evaluated on an independent set of 479 reports (with 52% notifiable reports). Results show that the system can reliably classify cancer notifiable reports with a sensitivity, specificity, and positive predictive value (PPV) of 0.99, 0.95, and 0.95, respectively, for the development set, and 0.98, 0.96, and 0.96 for the evaluation set. High sensitivity can be achieved at a slight expense in specificity and PPV. Conclusion: The system demonstrates how medical free-text processing enables the classification of cancer notifiable pathology reports with high reliability for potential use by Cancer Registries and pathology laboratories.
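
For reference, the reported figures come from a standard binary confusion-matrix calculation over notifiable versus non-notifiable reports; the sketch below shows that calculation with made-up counts, not the study's data.

```python
# Sensitivity, specificity and positive predictive value (PPV) from a binary
# confusion matrix. The counts are invented for illustration only.

def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # notifiable reports correctly flagged
    specificity = tn / (tn + fp)   # non-notifiable reports correctly passed over
    ppv = tp / (tp + fp)           # flagged reports that really are notifiable
    return sensitivity, specificity, ppv

sens, spec, ppv = classification_metrics(tp=120, fp=6, tn=100, fn=2)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```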

Relevance:

100.00%

Publisher:

Abstract:

We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data; we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality, in the presence of sufficient training data.
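
As a minimal illustration of the pattern-matching component mentioned above (the CRF classifier and its feature set are not shown), the sketch below masks a few easily templated identifiers with regular expressions. The patterns, placeholder tags, and sample report are invented, not the paper's rule set.

```python
# Toy pattern-matching layer of de-identification: replace dates, phone-style
# numbers and record numbers with placeholder tags. Illustrative patterns only.
import re

PATTERNS = [
    (r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b", "[DATE]"),        # e.g. 12/03/2007
    (r"\b\d{1,2}\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+\d{4}\b", "[DATE]"),
    (r"(?:\+61|\b0)[\d\s-]{8,12}\b", "[PHONE]"),              # Australian-style numbers
    (r"\bMRN[:\s]*\d{6,10}\b", "[ID]"),                       # record-number style IDs
]

def deidentify(text):
    for pattern, tag in PATTERNS:
        text = re.sub(pattern, tag, text, flags=re.IGNORECASE)
    return text

report = "Specimen received 12/03/2007 for MRN: 12345678; contact ward on 07 3123 4567."
print(deidentify(report))
```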

Relevance:

100.00%

Publisher:

Abstract:

Recently, a convex hull-based human identification protocol, whose steps can be performed by humans without additional aid, was proposed by Sobrado and Birget. The main part of the protocol involves the user mentally forming a convex hull of secret icons within a set of graphical icons and then clicking randomly within this convex hull. While some rudimentary security issues of this protocol have been discussed, a comprehensive security analysis has been lacking. In this paper, we analyze the security of this convex hull-based protocol. In particular, we show two probabilistic attacks that reveal the user's secret after the observation of only a handful of authentication sessions. These attacks can be efficiently implemented, as their time and space complexities are considerably less than those of a brute force attack. We show that while the first attack can be mitigated through appropriately chosen values of the system parameters, the second attack succeeds with a non-negligible probability even with large system parameter values that cross the threshold of usability.
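
To ground the protocol's response step, the sketch below checks whether a click falls inside the convex hull of the user's secret icons, using a Delaunay triangulation for the membership test. The icon layout, secret set, and screen coordinates are invented, and the probabilistic attacks analysed in the paper are not implemented here.

```python
# Hull-membership check for the response step of a convex hull-based challenge:
# the user must click inside the convex hull of their secret icons' positions.
# All positions and the secret set below are invented for illustration.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(7)
icon_positions = rng.uniform(0, 100, size=(50, 2))   # 50 icons scattered on screen
secret_indices = [3, 11, 27, 42]                     # the user's secret icons
hull = Delaunay(icon_positions[secret_indices])      # triangulation of the secret icons

def click_is_valid(click_xy):
    """True if the click lies inside the convex hull of the secret icons."""
    return bool(hull.find_simplex(np.atleast_2d(click_xy))[0] >= 0)

centre = icon_positions[secret_indices].mean(axis=0)
print(click_is_valid(centre))          # centroid of the secret icons -> True
print(click_is_valid([200.0, 200.0]))  # far outside the screen -> False
```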

Relevance:

100.00%

Publisher:

Abstract:

Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent of the participants' inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
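
The additive secret sharing that the conversion subprotocol starts from can be stated in a few lines; the sketch below shows only that primitive (sharing and reconstruction modulo a prime), with the field size and values chosen for illustration. The additive-to-multiplicative conversion and the polynomial evaluation protocol themselves are not shown.

```python
# Additive secret sharing over a prime field F_p: a secret is split into n
# random shares that sum to it mod p, so any t < n shares reveal nothing.
# The small prime and the secret below are illustrative only.
import secrets

P = 2_147_483_647  # a Mersenne prime; a real deployment would pick the field to suit G

def share(secret, n):
    """Split `secret` into n additive shares modulo P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)   # last share makes the sum come out right
    return parts

def reconstruct(shares):
    return sum(shares) % P

s = 123456
shares = share(s, n=5)
print(reconstruct(shares) == s)        # True: all n shares recover the secret
print(reconstruct(shares[:4]) == s)    # almost surely False: 4 of 5 shares reveal nothing
```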

Relevance:

100.00%

Publisher:

Abstract:

This chapter describes the challenges of integrating new technologies with literacy education in pre-service primary teacher education in Australia. The authors describe the policy context and regulatory mechanisms controlling pre-service education, including a national set of professional standards for graduate teachers, a new national curriculum for school students, the introduction of high-stakes national assessment for school students, and the looming threat of decontextualised back-to-basics professional entry tests for aspiring teachers. The chapter includes three case studies of the authors' pedagogical practices that attempt to reframe conceptions of the literacy capabilities of pre-service teachers to reflect the complex and sophisticated requirements of teachers in contemporary schooling. The authors conclude the chapter with a discussion of the implications of these case studies as they illustrate the ways that pre-service teachers can be scaffolded and supported to develop creative capacity and critical awareness of the kinds of literacies required in the digital age, despite restrictive regimes.

Relevance:

100.00%

Publisher:

Abstract:

Recent research at the Queensland University of Technology has investigated the structural and thermal behaviour of load-bearing Light gauge Steel Frame (LSF) wall systems made of 1.15 mm G500 steel studs and varying plasterboard and insulation configurations (cavity and external insulation), using full scale fire tests. Suitable finite element models of LSF walls were then developed and validated by comparison with test results. In this study, the validated finite element models of LSF wall panels subject to standard fire conditions were used in a detailed parametric study to investigate the effects of important parameters such as steel grade and thickness, plasterboard screw spacing, plasterboard lateral restraint, insulation materials and load ratio on their performance under standard fire conditions. Suitable equations were proposed to predict the time–temperature profiles of LSF wall studs with eight different plasterboard-insulation configurations, and these were used in the finite element analyses. The finite element parametric studies produced extensive fire performance data for the LSF wall panels in the form of load ratio versus time and critical hot flange (failure) temperature curves for the eight wall configurations. This data demonstrated the superior fire performance of externally insulated LSF wall panels made of different steel grades and thicknesses. It also led to the development of a set of equations to predict the important relationship between the load ratio and the critical hot flange temperature of LSF wall studs. Finally, this paper proposes a simplified method to predict the fire resistance rating of LSF walls based on the two proposed sets of equations for the load ratio–hot flange temperature and time–temperature relationships.
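
The simplified method in the last sentence can be pictured as two curve look-ups chained together; the sketch below wires them up with invented placeholder curves (they are not the paper's proposed equations), purely to show the shape of the calculation.

```python
# Shape of the simplified fire-resistance calculation described above: read a
# critical hot flange temperature from a load ratio vs temperature relationship,
# then invert the configuration's time-temperature curve to get a resistance time.
# Both curves below are hypothetical placeholders, so the output is illustrative only.
import numpy as np

# hypothetical load ratio -> critical hot flange temperature (deg C)
load_ratios = np.array([0.2, 0.4, 0.6, 0.8])
critical_temps = np.array([650.0, 560.0, 470.0, 380.0])

# hypothetical hot flange time-temperature curve for one wall configuration
times_min = np.array([0, 30, 60, 90, 120, 150])
flange_temps = np.array([20.0, 180.0, 320.0, 450.0, 560.0, 650.0])

def fire_resistance_rating(load_ratio):
    t_crit = np.interp(load_ratio, load_ratios, critical_temps)
    # time at which the hot flange reaches the critical temperature
    return np.interp(t_crit, flange_temps, times_min), t_crit

frr, t_crit = fire_resistance_rating(0.5)
print(f"critical temperature ~{t_crit:.0f} C -> fire resistance ~{frr:.0f} min")
```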