606 results for dynamic methods
Abstract:
With the widespread adoption of social media websites on the internet, and the huge number of users participating in and generating content on these websites, personalisation has become a necessity. One of the major issues in personalisation is building users’ profiles, which depend on many elements, such as the data used, the application domain they aim to serve, the representation method and the construction methodology. Recently, this area of research has become a focus for many researchers, and the number of proposed methods is growing quickly. This survey discusses the available user modelling techniques for social media websites, highlights the weaknesses and strengths of these methods, and provides a vision for future work in user modelling for social media websites.
Abstract:
Organizational transformations reliant on successful ICT system developments continue to fail to deliver projected benefits, even when contemporary governance models are applied rigorously. Modifications to traditional program, project and systems development management methods have produced little material improvement in transformation success, as they are unable to routinely address the complexity and uncertainty of dynamically aligning IS investments and innovation. Complexity theory provides insight into why this phenomenon occurs and is used to develop a conceptualization of complexity in IS-driven organizational transformations. This research-in-progress aims to identify complexity formulations relevant to organizational transformation. Political/power-based influences, interrelated business rules, socio-technical innovation, impacts on stakeholders and emergent behaviors are commonly considered as characterizing complexity, while the proposed conceptualization accommodates these as connectivity, irreducibility, entropy and/or information gain in hierarchical approximation and scaling, the number of states in a finite automaton and/or the dimension of an attractor, and information and/or variety.
Abstract:
Porn studies researchers in the humanities have tended to use different research methods from those in the social sciences. There has been surprisingly little conversation between the two groups about methodology. This article presents a basic introduction to textual analysis and statistical analysis, aiming to give all porn studies researchers a familiarity with these two quite distinct traditions of data analysis. Comparing the two approaches, the article suggests that social science approaches are often strongly reliable, but can sacrifice validity to that end. Textual analysis is much less reliable, but has the capacity to be strongly valid. Statistical methods tend to produce a picture of human beings as groups, in terms of what they have in common, whereas humanities approaches often seek out uniqueness. Social science approaches have asked a more limited range of questions than have the humanities. The article ends with a call to mix up the kinds of research methods that are applied to various objects of study.
Abstract:
Purpose: This study investigated the effect of chemical conjugation of the amino acid L-leucine to the polysaccharide chitosan on the dispersibility and drug release pattern of a polymeric nanoparticle (NP)-based controlled release dry powder inhaler (DPI) formulation. Methods: A chemical conjugate of L-leucine with chitosan was synthesized and characterized by Infrared (IR) Spectroscopy, Nuclear Magnetic Resonance (NMR) Spectroscopy, Elemental Analysis and X-ray Photoelectron Spectroscopy (XPS). Nanoparticles of both chitosan and its conjugate were prepared by a water-in-oil emulsification/glutaraldehyde cross-linking method using the antihypertensive agent diltiazem (Dz) hydrochloride as the model drug. The surface morphology and particle size distribution of the nanoparticles were determined by Scanning Electron Microscopy (SEM) and Dynamic Light Scattering (DLS). The dispersibility of the nanoparticle formulation was analysed by a Twin Stage Impinger (TSI) with a Rotahaler as the DPI device. Deposition of the particles in the different stages was determined by gravimetry and the amount of drug released was analysed by UV spectrophotometry. The release profile of the drug was studied in phosphate buffered saline at 37 °C and analysed by UV spectrophotometry. Results: The TSI study revealed that the fine particle fractions (FPF), as determined gravimetrically, for empty and drug-loaded conjugate nanoparticles were significantly higher than for the corresponding chitosan nanoparticles (24±1.2% and 21±0.7% vs 19±1.2% and 15±1.5% respectively; n=3, p<0.05). The FPF of drug-loaded chitosan and conjugate nanoparticles, in terms of the amount of drug determined spectrophotometrically, had similar values (21±0.7% vs 16±1.6%). After an initial burst, both chitosan and conjugate nanoparticles showed controlled release that lasted about 8 to 10 days, but conjugate nanoparticles showed twice as much total drug release compared to chitosan nanoparticles (~50% vs ~25%).
Conjugate nanoparticles also showed significantly higher drug loading and entrapment efficiency than chitosan nanoparticles (conjugate: 20±1% & 46±1%, chitosan: 16±1% & 38±1%, n=3, p<0.05). Conclusion: Although L-leucine conjugation to chitosan increased the dispersibility of the formulated nanoparticles, the FPF values are still far from optimum. The particles showed a high level of initial burst release (chitosan, 16% and conjugate, 31%) that will also need further optimization.
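The fine particle fraction (FPF) reported above is determined gravimetrically as the ratio of powder reaching the lower impinger stage to the total recovered mass. A minimal sketch of that arithmetic, using hypothetical stage masses rather than the study's raw data:

```python
def fine_particle_fraction(lower_stage_mass, total_recovered_mass):
    """FPF: fraction of recovered powder reaching the lower (fine) stage of a twin-stage impinger."""
    return lower_stage_mass / total_recovered_mass

# Hypothetical masses (mg) recovered from the inhaler device, upper stage and lower stage.
device_mg, upper_mg, lower_mg = 6.0, 13.0, 4.8
fpf = fine_particle_fraction(lower_mg, device_mg + upper_mg + lower_mg)
print(f"FPF = {100 * fpf:.1f}%")  # prints "FPF = 20.2%"
```

The same ratio can instead be computed from the spectrophotometric drug assay in each stage, which is why the abstract reports two FPF figures per formulation.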
Abstract:
This paper merges the analysis of a case history with a simplified theoretical model of a rather singular phenomenon that may occur in rotating machinery. Starting from the former, a small industrial steam turbine experienced very strange behavior during megawatt loading. When the unit approached the maximum allowed power, the temperature of the babbitt metal of the thrust bearing pads showed a constant increase with an unrecoverable drift. Bearing inspection showed that the pad trailing edges had the typical appearance of electrical pitting. This kind of damage was not repairable and the bearing pads had to be replaced. The problem occurred several times in sequence and was solved only by adding further ground brushes to the shaft-line. Failure analysis indicated electro-discharge machining as the root fault. A specific model, able to take into consideration the effect of electrical pitting and the decrease in load capacity caused by the damage to the babbitt metal, is proposed in the paper and shows that the phenomenon causes the irretrievable failure of the thrust bearing.
Abstract:
The buffeting response of a cable-stayed bridge under construction is investigated through wind tunnel tests and numerical simulations. Two configurations of the erection stage have been considered and compared in terms of dynamic response and internal forces using the results of the experimental aeroelastic models. Moreover, the results of a numerical model able to simulate the simultaneous effects of vortex shedding from the tower and the aeroelastic response of the deck are compared with the wind tunnel results.
Abstract:
Bluetooth technology is increasingly used, among Automated Vehicle Identification Systems, to retrieve important information about urban networks. Because the movement of Bluetooth-equipped vehicles can be monitored throughout the network of Bluetooth sensors, this technology represents an effective means to acquire accurate time-dependent Origin-Destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed through data filtering and correction techniques. Among the main challenges inherent to Bluetooth data: first, Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles, so the exact journey for some vehicles becomes a latent pattern that needs to be estimated; second, sensors in close proximity to each other may have overlapping detection areas, making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold: to give an overview of the issues inherent to Bluetooth technology, through the analysis of the data available from the Bluetooth sensors in Brisbane; and to propose a method for retrieving the itineraries of individual Bluetooth-equipped vehicles. We argue that accurately estimating these latent itineraries is a crucial step toward the retrieval of accurate dynamic Origin-Destination matrices.
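The latent-itinerary problem can be pictured with a toy sketch. The sensor graph and the shortest-route assumption below are illustrative, not the paper's estimation method: when some sensors miss a vehicle, the gap between successive detections is filled with the most plausible route.

```python
from collections import deque

# Hypothetical sensor adjacency graph: which Bluetooth sensors are directly reachable by road.
GRAPH = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["B", "C", "E"], "E": ["D"]}

def shortest_path(graph, start, goal):
    """Breadth-first shortest path between two sensors."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def infer_itinerary(detections):
    """Fill gaps left by missed detections, assuming the shortest route between successive hits."""
    itinerary = [detections[0]]
    for a, b in zip(detections, detections[1:]):
        itinerary += shortest_path(GRAPH, a, b)[1:]
    return itinerary

print(infer_itinerary(["A", "D", "E"]))  # prints ['A', 'B', 'D', 'E']: sensor B inferred
```

A real estimator would weigh travel times and overlapping detection zones rather than assume the shortest path; BFS merely stands in for that machinery here.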
Abstract:
Voltage unbalance is a major power quality problem in low voltage residential feeders, due to the random location and rating of single-phase rooftop photovoltaic (PV) units. In this paper, two improvement methods, based on the application of series (DVR) and parallel (DSTATCOM) custom power devices, are investigated to mitigate the voltage unbalance problem in these feeders. First, based on load flow analysis carried out in MATLAB, the effectiveness of these two custom power devices is studied with respect to voltage unbalance reduction in urban and semi-urban/rural feeders containing rooftop PVs. Their effectiveness is studied from the installation location and rating points of view. Then, a Monte Carlo based stochastic analysis is carried out to investigate their efficacy under different uncertainties of load and of PV rating and location in the network. After the numerical analyses, a converter topology and control algorithm are proposed for the DSTATCOM and DVR for balancing the network voltage at their point of common coupling. A state feedback control, based on the pole-shift technique, is developed to regulate the voltage at the output of the DSTATCOM and DVR converters such that voltage balancing is achieved in the network. The dynamic feasibility of voltage unbalance reduction and profile improvement in LV feeders, using the proposed structure and control algorithm for the DSTATCOM and DVR, is verified through detailed PSCAD/EMTDC simulations.
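The Monte Carlo step can be pictured with a toy model. Everything below is an illustrative assumption, not the paper's load flow study: houses are assigned to phases at random, some receive a PV unit, and a crude per-phase power-deviation proxy stands in for the voltage unbalance computed by a real load flow.

```python
import random

def unbalance_proxy(phase_kw):
    """Toy unbalance proxy: largest per-phase deviation from the mean net power (kW)."""
    mean = sum(phase_kw) / 3.0
    return max(abs(p - mean) for p in phase_kw)

def monte_carlo_unbalance(n_houses=30, pv_prob=0.3, pv_kw=5.0, load_kw=2.0, trials=500, seed=1):
    """Randomise single-phase PV placement and uptake; average the unbalance proxy over trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        phases = [0.0, 0.0, 0.0]
        for _ in range(n_houses):
            net = load_kw - (pv_kw if rng.random() < pv_prob else 0.0)
            phases[rng.randrange(3)] += net  # random phase assignment
        total += unbalance_proxy(phases)
    return total / trials

print(round(monte_carlo_unbalance(), 2), "kW mean worst-phase deviation")
```

The paper's analysis would replace the proxy with an actual three-phase load flow and a standard voltage unbalance factor; the sampling loop is the part this sketch illustrates.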
Abstract:
The fastest-growing segment of jobs in the creative sector is in those firms that provide creative services to other sectors (Hearn, Goldsmith, Bridgstock and Rodgers 2014, this volume; Cunningham 2014, this volume). There are also a large number of Creative Services workers (Architecture and Design, Advertising and Marketing, Software and Digital Content occupations) embedded in organizations in other industry sectors (Cunningham and Higgs 2009). Ben Goldsmith (2014, this volume) shows, for example, that the Financial Services sector is the largest employer of digital creative talent in Australia. But why should this be? We argue it is because ‘knowledge-based intangibles are increasingly the source of value creation and hence of sustainable competitive advantage’ (Mudambi 2008, 186). This value creation occurs primarily at the research and development (R and D) and marketing ends of the supply chain. Both of these areas require strong creative capabilities in order to design for, and to persuade, consumers. It is no surprise that Jess Rodgers (2014, this volume), in a study of Australia’s Manufacturing sector, found designers and advertising and marketing occupations to be the most numerous creative occupations. Greg Hearn and Ruth Bridgstock (2013, forthcoming) suggest ‘the creative heart of the creative economy […] is the social and organisational routines that manage the generation of cultural novelty, both tacit and codified, internal and external, and [cultural novelty’s] combination with other knowledges […] to produce and capture value’. Moreover, the main ‘social and organisational routine’ is usually a team (for example, Grabher 2002; 2004).
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, whether that is information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or an advantage for those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are pre-formed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
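The content-analysis scoring described in the first paper can be caricatured in a few lines. The term weights and the capacity cut-off below are hypothetical; a deployed system would refine both iteratively as the crisis vocabulary shifts.

```python
# Hypothetical keyword weights for crisis triage: topical relevance and urgency.
TOPIC_TERMS = {"flood": 2, "evacuate": 3, "bridge": 1}
URGENT_TERMS = {"help": 3, "trapped": 4, "now": 1}

def score_tweet(text):
    """Crude content-analysis score: topical relevance plus urgency."""
    words = text.lower().split()
    relevance = sum(TOPIC_TERMS.get(w, 0) for w in words)
    urgency = sum(URGENT_TERMS.get(w, 0) for w in words)
    return relevance + urgency

def triage(tweets, capacity):
    """Keep only the top-scoring tweets, matched to responder capacity."""
    return sorted(tweets, key=score_tweet, reverse=True)[:capacity]

tweets = ["cat video lol", "flood rising help trapped", "bridge closed evacuate now"]
print(triage(tweets, 2))  # the two crisis-relevant tweets survive the cut
```

User profiling, the second solution the paper suggests, would add a per-author weight (amplifier or authoritative source) to the same score.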
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale scenes or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location, and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
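Comparing an estimated two-view geometry against ground truth typically reduces to an error metric between the recovered and true relative poses. One common choice, assumed here for illustration rather than taken from the paper, is the angle of the relative rotation:

```python
import math

def rotation_angle_error_deg(r_true, r_est):
    """Angle of the relative rotation R_true^T @ R_est (3x3 row-major lists), in degrees."""
    # trace(R_true^T @ R_est) equals the element-wise (Frobenius) inner product of the matrices.
    trace = sum(r_true[k][i] * r_est[k][i] for k in range(3) for i in range(3))
    cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp against rounding noise
    return math.degrees(math.acos(cos_angle))

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
rot_z_90 = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]  # 90 degrees about z
print(round(rotation_angle_error_deg(identity, rot_z_90), 6))  # prints 90.0
```

A full evaluation would also score the translation direction and inlier classification per test case, then aggregate across the generated data set.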
Abstract:
Aims: To compare different methods for identifying alcohol involvement in injury-related emergency department presentations by Queensland youth, and to explore the alcohol terminology used in triage text. Methods: Emergency Department Information System data were provided for patients aged 12-24 years with an injury-related diagnosis code presenting to a Queensland emergency department over the 5-year period 2006-2010 (N=348,895). Three approaches were used to estimate alcohol involvement: 1) analysis of coded data, 2) mining of triage text, and 3) estimation using an adaptation of alcohol-attributable fractions (AAF). Cases were identified as ‘alcohol-involved’ by code and text, as well as AAF-weighted. Results: Around 6.4% of these injury presentations overall had some documentation of alcohol involvement, with higher proportions of alcohol involvement documented for 18-24 year olds, females, Indigenous youth, presentations occurring on a Saturday or Sunday, and presentations occurring between midnight and 5am. The most common alcohol terms identified for all subgroups were generic alcohol terms (e.g. ETOH or alcohol), with almost half of the cases where alcohol involvement was documented having a generic alcohol term recorded in the triage text. Conclusions: Emergency department data are a useful source of information for identifying high-risk subgroups and targeting intervention opportunities, though in their current unstandardised form they are not a reliable source for incidence or trend estimation. Improving the accuracy and consistency of identifying, documenting and coding alcohol involvement at the point of data capture in the emergency department is the most desirable long-term approach to produce a more solid evidence base to support policy and practice in this field.
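The triage-text mining approach can be sketched as a keyword flag over free text. The abstract names only the generic terms ETOH and alcohol; the extra variants in the pattern below are hypothetical additions, and the sample records are invented.

```python
import re

# Generic alcohol terms from the abstract (ETOH, alcohol) plus hypothetical variants.
ALCOHOL_PATTERN = re.compile(r"\b(etoh|alcohol|intoxicated|drunk)\b", re.IGNORECASE)

def flag_alcohol_involvement(triage_text):
    """Flag a presentation as alcohol-involved if the triage free text mentions an alcohol term."""
    return bool(ALCOHOL_PATTERN.search(triage_text))

records = [
    "fall from skateboard, ETOH on board",
    "laceration to hand, kitchen knife",
    "assault, pt intoxicated",
]
print([flag_alcohol_involvement(r) for r in records])  # prints [True, False, True]
```

The coded-data and AAF approaches described in the abstract would supplement this flag, since triage text frequently under-documents alcohol involvement.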
Abstract:
Support for typically out-of-vocabulary query terms such as names, acronyms, and foreign words is an important requirement of many speech indexing applications. However, to date, many unrestricted-vocabulary indexing systems have struggled to balance good detection rates with fast query speeds. This paper presents a fast and accurate unrestricted-vocabulary speech indexing technique named Dynamic Match Lattice Spotting (DMLS). The proposed method augments the conventional lattice spotting technique with dynamic sequence matching, together with a number of other novel algorithmic enhancements, to obtain a system that is capable of searching hours of speech in seconds while maintaining excellent detection performance.
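The dynamic sequence matching at the core of DMLS tolerates phone-level recognition errors when matching a query against stored lattice sequences. A minimal sketch of that idea, using a plain edit distance over hypothetical phone strings rather than the actual DMLS cost model:

```python
def edit_distance(a, b):
    """Classic single-row dynamic-programming edit distance between two phone sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (pa != pb))
    return dp[-1]

def spot(query, lattice_sequences, max_cost=1):
    """Accept any stored sequence within max_cost edits of the query phones."""
    return [seq for seq in lattice_sequences if edit_distance(query, seq) <= max_cost]

stored = [["k", "ae", "t"], ["k", "ah", "t"], ["d", "ao", "g"]]
print(spot(["k", "ae", "t"], stored))  # exact hit plus one near miss; "d ao g" rejected
```

DMLS additionally exploits lattice structure and phone-confusability costs so that this tolerant matching stays fast over hours of speech, which a brute-force scan like the one above would not.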
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, across a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, so we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made.
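As a toy illustration of the mean-field idea (not the paper's lattice model), the spatially averaged occupancy C of a volume-excluding birth process obeys the logistic equation dC/dt = lambda*C*(1 - C): proliferation is throttled by the probability that a neighbouring site is already occupied. A forward-Euler loop integrates this directly:

```python
def mean_field_density(initial_density, birth_rate, t_end, dt=0.01):
    """Forward-Euler integration of the logistic mean-field equation dC/dt = birth_rate*C*(1-C)."""
    c = initial_density
    for _ in range(int(t_end / dt)):
        c += dt * birth_rate * c * (1.0 - c)  # growth term shrinks as sites fill up
    return c

# Occupancy saturates toward 1 as crowding (volume exclusion) halts proliferation.
print(round(mean_field_density(0.05, birth_rate=1.0, t_end=20.0), 3))  # prints 1.0
```

The pair-wise approximation improves on this by tracking nearest-neighbour correlations instead of assuming independently occupied sites, which is why it captures the transient behaviour better while sharing the same asymptote.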