292 results for RM extended algorithm
Abstract:
This paper presents an extended granule-mining-based methodology to effectively describe the relationships between granules not only by traditional support and confidence, but also by diversity and condition diversity. Diversity measures how diverse a granule is in its associations with other granules, and it provides a novel kind of knowledge in databases. We also provide an algorithm to implement the proposed methodology. Experiments conducted to characterize a real network traffic data collection show that the proposed concepts and algorithm are promising.
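As an illustration of the kinds of granule-level measures this abstract refers to, the following minimal Python sketch computes support, confidence and a simple diversity score for granules represented as sets of transaction identifiers. The diversity definition used here (the share of other granules a given granule co-occurs with) is an assumption for illustration only, not the paper's exact formulation.

# Illustrative granule-level measures: support, confidence, and a simple
# diversity score. Granules are represented as sets of transaction IDs.
# The diversity definition below is an assumption for illustration only.

def support(granule, n_transactions):
    """Fraction of all transactions covered by the granule."""
    return len(granule) / n_transactions

def confidence(antecedent, consequent):
    """Fraction of the antecedent granule's transactions also in the consequent."""
    if not antecedent:
        return 0.0
    return len(antecedent & consequent) / len(antecedent)

def diversity(granule, other_granules):
    """Share of other granules that the given granule co-occurs with (assumed definition)."""
    if not other_granules:
        return 0.0
    linked = sum(1 for g in other_granules if granule & g)
    return linked / len(other_granules)

# Example usage with toy transaction-ID sets.
g1, g2, g3 = {1, 2, 3, 4}, {3, 4, 5}, {6, 7}
print(support(g1, n_transactions=10))   # 0.4
print(confidence(g1, g2))               # 0.5
print(diversity(g1, [g2, g3]))          # 0.5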
Abstract:
An extended theory of planned behavior (TPB) was used to predict young people's intentions to donate money to charities in the future. Students (N = 210; 18-24 years) completed a questionnaire assessing their attitude, subjective norm, perceived behavioral control (PBC), moral obligation, past behavior, and intentions toward donating money. Regression analyses revealed that the extended TPB explained 61% of the variance in intentions to donate money. Attitude, PBC, moral norm, and past behavior predicted intentions, representing future targets for charitable giving interventions.
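For readers unfamiliar with how such variance-explained figures are typically obtained, the following Python sketch shows a multiple regression of intentions on the extended-TPB predictors using statsmodels. The data file and column names are hypothetical placeholders; the 61% figure reported above comes from the study's own data, not from this sketch.

# Sketch of a multiple regression predicting donation intentions from
# extended-TPB predictors. The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tpb_donation.csv")  # hypothetical data file

model = smf.ols(
    "intention ~ attitude + subjective_norm + pbc + moral_obligation + past_behavior",
    data=df,
).fit()

print(model.rsquared)   # proportion of variance in intentions explained
print(model.summary())  # coefficients indicate which predictors are significant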
Abstract:
The Wright-Fisher model is an Itô stochastic differential equation that was originally introduced to model genetic drift within finite populations and has recently been used as an approximation to ion channel dynamics within cardiac and neuronal cells. While analytic solutions to this equation remain within the interval [0,1], current numerical methods are unable to preserve such boundaries in the approximation. We present a new numerical method that guarantees approximations to a form of the Wright-Fisher model, which includes mutation, remain within [0,1] for all time with probability one. Strong convergence of the method is proved, and numerical experiments suggest that this new scheme converges with strong order 1/2. When the method is extended to a multidimensional case, numerical tests suggest that the algorithm still converges strongly with order 1/2. Finally, numerical solutions obtained using this new method are compared to those obtained using the Euler-Maruyama method in which the Wiener increment is resampled to ensure solutions remain within [0,1].
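The boundary-preserving scheme itself is not specified in the abstract, but the Euler-Maruyama baseline it is compared against can be sketched as follows in Python: the Wiener increment is resampled whenever a step would leave [0,1]. The drift and diffusion terms below are one common parameterisation of the Wright-Fisher model with mutation, assumed here for illustration rather than taken from the paper.

# Euler-Maruyama for dX = (theta1*(1-X) - theta2*X) dt + sqrt(X*(1-X)) dW,
# resampling the Wiener increment whenever a step would leave [0, 1].
import numpy as np

def em_wright_fisher_resampled(x0, theta1, theta2, dt, n_steps, rng=None, max_tries=1000):
    if rng is None:
        rng = np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        xk = x[k]
        drift = theta1 * (1.0 - xk) - theta2 * xk
        diff = np.sqrt(max(xk * (1.0 - xk), 0.0))
        x_new = xk
        for _ in range(max_tries):
            dW = rng.normal(0.0, np.sqrt(dt))
            x_new = xk + drift * dt + diff * dW
            if 0.0 <= x_new <= 1.0:
                break
        x[k + 1] = min(max(x_new, 0.0), 1.0)  # safety clamp if resampling fails
    return x

# Example: a single path with symmetric mutation rates.
path = em_wright_fisher_resampled(x0=0.5, theta1=0.5, theta2=0.5, dt=1e-3, n_steps=5000)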
Abstract:
Recently, Software as a Service (SaaS) in Cloud computing has become more and more significant among software users and providers. To offer a SaaS with flexible functions at a low cost, SaaS providers have focused on the decomposition of SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources that have been initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components in their implementation. In a composite SaaS, it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for multiple composite SaaS application component clustering in Cloud computing that addresses this gap. To the best of our knowledge, this is the first attempt to handle multiple composite SaaS reconfiguration placement in a dynamic Cloud environment. The experimental results demonstrate the feasibility and scalability of the GGA.
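A minimal Python sketch of the grouping-style encoding that a GGA of this kind might use is given below: a chromosome assigns each SaaS component to a group, and a communication-aware fitness penalises placing heavily communicating components in different groups. The fitness weighting and the mutation operator are illustrative assumptions, not the paper's actual GGA.

# Sketch of a grouping-style chromosome for clustering SaaS components,
# with a communication-aware fitness. Illustrative assumptions throughout.
import random

def random_grouping(n_components, n_groups):
    """Chromosome: component index -> group id."""
    return [random.randrange(n_groups) for _ in range(n_components)]

def fitness(chromosome, comm_matrix):
    """Penalise communication between components placed in different groups."""
    cost = 0.0
    n = len(chromosome)
    for i in range(n):
        for j in range(i + 1, n):
            if chromosome[i] != chromosome[j]:
                cost += comm_matrix[i][j]
    return -cost  # higher fitness = less cross-group communication

def mutate(chromosome, n_groups, rate=0.1):
    """Move a component to another group with a small probability."""
    return [random.randrange(n_groups) if random.random() < rate else g
            for g in chromosome]

# Toy example: 4 components with pairwise communication volumes.
comm = [[0, 5, 0, 1],
        [5, 0, 2, 0],
        [0, 2, 0, 4],
        [1, 0, 4, 0]]
pop = [random_grouping(4, 2) for _ in range(20)]
best = max(pop, key=lambda c: fitness(c, comm))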
Abstract:
Information security has been recognized as a core requirement for corporate governance, expected not only to facilitate the management of risks but also to act as a corporate enabler that supports and contributes to the sustainability of organizational operations. In implementing information security, the enterprise information security policy is the set of principles and strategies that guide the course of action for security activities, and it may be represented as a brief statement that defines program goals and sets information security and risk requirements. The enterprise information security policy (alternatively referred to as the security policy in this paper), which represents the meta-policy of information security, is an element of corporate ICT governance and is derived from the strategic requirements for risk management and corporate governance. Consistent alignment between the security policy and the other corporate business policies and strategies has to be maintained if information security is to be implemented according to evolving business objectives. This alignment may be facilitated by managing the security policy alongside other corporate business policies within the strategic management cycle. There are, however, limitations in current approaches for developing and managing the security policy to facilitate consistent strategic alignment. This paper proposes a conceptual framework for security policy management by presenting propositions to positively affect security policy alignment with business policies and prescribing a security policy management approach that expounds on the propositions.
Abstract:
For many years, computer vision has lured researchers with promises of a low-cost, passive, lightweight and information-rich sensor suitable for navigation purposes. The prime difficulty in vision-based navigation is that the navigation solution will continually drift with time unless external information is available, whether it be cues from the appearance of the scene, a map of features (whether built online or known a priori), or from an externally-referenced sensor. It is not merely position that is of interest in the navigation problem. Attitude (i.e. the angular orientation of a body with respect to a reference frame) is integral to a vision-based navigation solution and is often of interest in its own right (e.g. flight control). This thesis examines vision-based attitude estimation in an aerospace environment, and two methods are proposed for constraining drift in the attitude solution: one through a novel integration of optical flow and the detection of the sky horizon, and the other through a loosely-coupled integration of Visual Odometry and GPS position measurements. In the first method, roll angle, pitch angle and the three aircraft body rates are recovered through a novel method of tracking the horizon over time and integrating the horizon-derived attitude information with optical flow. An image processing front-end is used to select several candidate lines in an image that may or may not correspond to the true horizon, and the optical flow is calculated for each candidate line. Using an Extended Kalman Filter (EKF), the previously estimated aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon in the image. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement in the EKF. To evaluate the accuracy of the algorithm, two flights were conducted: one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other using a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna 172 flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively. In the second method for estimating attitude, a novel integrated GPS/Visual Odometry (GPS/VO) navigation filter is proposed, using a structure similar to a classic loosely-coupled GPS/INS error-state navigation filter. Under such an arrangement, the error dynamics of the system are derived and a Kalman Filter is developed for estimating the errors in position and attitude. Through analysis similar to that of the GPS/INS problem, it is shown that the proposed filter is capable of recovering the complete attitude (i.e. pitch, roll and yaw) of the platform when subjected to acceleration not parallel to velocity, for both the monocular and stereo variants of the filter. Furthermore, it is shown that under general straight-line motion (e.g. constant velocity), only the component of attitude in the direction of motion is unobservable. Numerical simulations are performed to demonstrate the observability properties of the GPS/VO filter in both the monocular and stereo camera configurations.
Furthermore, the proposed filter is tested on imagery collected using a Cessna 172 to demonstrate the observability properties on real-world data. The proposed GPS/VO filter does not require additional restrictions or assumptions such as platform-specific dynamics, map-matching, feature-tracking, visual loop-closing, a known gravity vector, or additional sensors such as an IMU or magnetic compass. Since no platform-specific dynamics are required, the proposed filter is not limited to the aerospace domain and has the potential to be deployed on other platforms such as ground robots or mobile phones.
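As a hedged illustration of the kind of statistical association test mentioned in the first method, the following Python sketch shows a standard Mahalanobis (chi-square) gating check between a predicted measurement and a candidate horizon-line measurement. The measurement model, dimensions and threshold are assumptions for illustration and are not taken from the thesis.

# Mahalanobis (chi-square) gating of a candidate measurement against an EKF
# prediction. z and z_pred are measurement vectors, H the measurement matrix,
# P the state covariance, R the measurement noise covariance.
import numpy as np
from scipy.stats import chi2

def gate(z, z_pred, H, P, R, prob=0.95):
    """Accept the candidate if its squared Mahalanobis distance to the
    predicted measurement falls inside the chi-square gate."""
    innovation = z - z_pred
    S = H @ P @ H.T + R                          # innovation covariance
    d2 = float(innovation @ np.linalg.solve(S, innovation))
    return d2 <= chi2.ppf(prob, df=len(z)), d2

# Toy usage: a 2-D "horizon line" measurement (e.g. offset and slope in the image).
z      = np.array([0.10, 0.02])
z_pred = np.array([0.08, 0.00])
H, P   = np.eye(2), np.diag([0.05, 0.01])
R      = np.diag([0.02, 0.005])
accepted, distance = gate(z, z_pred, H, P, R)
# Among several candidate lines, the one with the smallest gated distance would
# be retained and fed to the EKF as the measurement update.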
Abstract:
Purpose: Silicone hydrogel contact lenses (CLs) are becoming increasingly popular for daily wear (DW), extended wear (EW) and continuous wear (CW), due to their higher oxygen transmissibility compared to hydrogel CLs. The aim of this study was to investigate the clinical and subjective performance of asmofilcon A (Menicon Co., Ltd), a new surface treated silicone hydrogel CL, during 6-night EW over 6 months (M). Methods: A prospective, randomised, single-masked, monadic study was conducted. N=60 experienced DW soft CL wearers were randomly assigned to wear either asmofilcon A (test: Dk=129, water content (WC)=40%, Nanogloss surface treatment) or senofilcon A (control: Dk=103, WC=38%, PVP internal wetting agent, Vistakon, Johnson & Johnson Vision Care) CLs bilaterally for 6 M on an EW basis. A PHMB-preserved solution (Menicon Co., Ltd) was dispensed for CL care. Evaluations were conducted at CL delivery and after 1 week (W), 4 W, 3 M and 6 M of EW. At each visit, a range of objective and subjective clinical performance measures were assessed. Results: N=50 subjects (83%) successfully completed the study, with the majority of discontinuations due to loss to follow-up (n=3) or moving away/travel (n=5). N=2 subjects experienced adverse events; n=1 unilateral red eye with asmofilcon A and n=1 asymptomatic infiltrate with senofilcon A. There were no significant differences in high or low contrast distance visual acuity (HCDVA or LCDVA) between asmofilcon A and senofilcon A; however, LCDVA decreased significantly over time with both CL types (p<0.05). The two CL types did not vary significantly with respect to any of the objective and subjective measures assessed (p>0.05); CL fitting characteristics and CL surface measurements were very similar and mean bulbar and limbal redness measures were always less than grade 1.0. Superior palpebral conjunctival injection showed a statistically, but not clinically, significant increase over time with both CL types (p<0.05). Corneal staining did not vary significantly between asmofilcon A and senofilcon A (p>0.05), with low median gradings of less than 0.5 observed for all areas assessed. There were no solution-related staining reactions observed with either CL type. The asmofilcon A and senofilcon A CLs were both rated highly with respect to overall comfort, with medians of 14 or 15 hours of comfortable lens wearing time per day reported at each of the study visits (p>0.05). Conclusions: Over 6 months of EW, the asmofilcon A and senofilcon A CLs performed in a similar manner with respect to visual acuity, ocular health and CL performance measures. Some changes over time were observed with both CL types, including reduced LCDVA and increased superior palpebral injection, which warrant further investigation in longer-term EW studies. Asmofilcon A appeared to be equivalent in performance to senofilcon A.
Abstract:
A composite SaaS (Software as a Service) is a software application composed of several software components and data components. The composite SaaS placement problem is to determine where each of the components should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From the computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem; thus, an Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was proposed. The ICCGA can find solutions of reasonable quality, but its computation time is noticeably slow. Aiming at improving the computation time, we propose an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA) in this paper. Experimental results show that the PCCGA not only requires less computation time but also generates better-quality solutions than the ICCGA.
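A minimal Python sketch of the cooperative co-evolutionary idea underlying the (P)CCGA follows: the placement vector is split into parts, each evolved by its own subpopulation, and an individual is scored by assembling it with the current best representatives of the other parts. In the PCCGA these subpopulation loops would run in parallel without synchronisation; the toy fitness and mutation-only evolution below are illustrative assumptions, not the paper's model.

# Cooperative co-evolution sketch: three subpopulations, each responsible for
# part of the placement vector, evaluated against the other parts' best reps.
import random

N_PARTS, PART_LEN, POP_SIZE, N_VALUES = 3, 4, 10, 5

def toy_fitness(full_vector):
    # Placeholder objective: prefer placements with small total index values.
    return -sum(full_vector)

def assemble(reps, i, candidate_part):
    # Full solution = representatives of all parts, with part i replaced.
    return sum((candidate_part if k == i else reps[k] for k in range(N_PARTS)), [])

subpops = [[[random.randrange(N_VALUES) for _ in range(PART_LEN)] for _ in range(POP_SIZE)]
           for _ in range(N_PARTS)]
reps = [pop[0] for pop in subpops]  # current best representative of each part

for generation in range(50):
    for i, pop in enumerate(subpops):       # in the PCCGA these subpopulations
        for idx, ind in enumerate(pop):     # evolve in parallel, unsynchronised
            child = [g if random.random() > 0.2 else random.randrange(N_VALUES) for g in ind]
            if toy_fitness(assemble(reps, i, child)) > toy_fitness(assemble(reps, i, ind)):
                pop[idx] = child
        reps[i] = max(pop, key=lambda ind: toy_fitness(assemble(reps, i, ind)))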
Abstract:
Food and interaction design presents an interesting challenge to the HCI community in attending to the pervasive nature of food, the socio-cultural differences in food practices and a changing global foodscape. To design for meaningful and positive interactions, it is essential to identify daily food practices and the opportunities for the design of technology to support such practices. This workshop brings together a community of researchers and practitioners in human-food interaction to attend to the practical and theoretical difficulties in designing for human-food interactions in everyday life. Through a practical field study and workshop, we explore themes of food experiences, health and wellbeing, sustainability and alternative food cultures.
Abstract:
The Lockyer Valley in southeast Queensland, Australia, hosts an economically significant alluvial aquifer system which has been impacted by prolonged drought conditions (~1997 to ~2009). Throughout this time, the system was under continued groundwater extraction, resulting in severe aquifer depletion. By 2008, much of the aquifer was at <30% of storage, but some relief occurred with rains in early 2009. However, between December 2010 and January 2011, most of southeast Queensland experienced unprecedented flooding, which generated significant aquifer recharge. In order to understand the spatial and temporal controls of groundwater recharge in the alluvium, a detailed 3D lithological property model of gravels, sands and clays was developed using GOCAD software. The spatial distribution of recharge throughout the catchment was assessed using hydrograph data from about 400 groundwater observation wells screened at the base of the alluvium. Water levels from these bores were integrated into the catchment-wide 3D geological model in GOCAD; the model highlights the complexity of the recharge mechanisms. To support this analysis, groundwater tracers (e.g. major and minor ions, stable isotopes, 3H and 14C) were used as independent verification. The use of these complementary methods has allowed the identification of zones where alluvial recharge primarily occurs from stream water during episodic flood events. However, the study also demonstrates that in some sections of the alluvium, rainfall recharge and discharge from the underlying basement into the alluvium are the primary recharge mechanisms. This is indicated by the absence of any response to the flood, as well as the old radiocarbon ages and distinct basement water chemistry signatures observed at these locations. Within the 3D geological model, the integration of water chemistry and time-series displays of water level surfaces before and after the flood suggests that the spatial variations of the flood response in the alluvium are primarily controlled by valley morphology and lithological variations within the alluvium. The integration of time-series of groundwater level surfaces in the 3D geological model also enables the quantification of the volumetric change of groundwater stored in the unconfined sections of this alluvial aquifer during drought and following flood events. The 3D representation and analysis of hydraulic and recharge information has considerable advantages over the traditional 2D approach. For example, while many studies focus on singular aspects of catchment dynamics and groundwater-surface water interactions, the 3D approach is capable of integrating multiple types of information (topography, geology, hydraulics, water chemistry and spatial data) into a single representation, which provides valuable insights into the major factors controlling aquifer processes.
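As a rough illustration of the storage-change quantification mentioned above, the following Python sketch computes the volumetric change in unconfined storage from two gridded water-table surfaces and a specific yield. The grid, cell size and specific yield are placeholder values; the study itself performs this kind of calculation within its GOCAD 3D model.

# Change in unconfined storage between two gridded water-table surfaces.
import numpy as np

def storage_change(head_before, head_after, cell_area_m2, specific_yield):
    """Volume change (m^3) = sum over cells of (rise in head) * cell area * Sy."""
    dh = np.asarray(head_after) - np.asarray(head_before)
    return float(np.nansum(dh) * cell_area_m2 * specific_yield)

# Toy 3x3 grids of water-table elevation (m), 100 m x 100 m cells, Sy = 0.1.
before = [[95.0, 95.2, 95.1], [94.8, 95.0, 94.9], [94.5, 94.7, 94.6]]
after  = [[96.1, 96.0, 95.9], [95.7, 95.8, 95.6], [95.2, 95.4, 95.3]]
print(storage_change(before, after, cell_area_m2=100 * 100, specific_yield=0.1))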
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and they are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data-capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, the adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause analysis efforts in the quality improvement cycle, since it cuts the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables us to obtain highly informative estimates of change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, highly recommend the developed Bayesian estimators as a strong alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed model are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also be extended to the industrial and business domains where quality monitoring was initially developed.
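As a simplified illustration of change point estimation in a Poisson process, the following Python sketch places a conjugate Gamma prior on the rate in each segment, integrates the rates out analytically, and computes the posterior over the change time on a grid. This closed-form sketch stands in for the hierarchical MCMC estimators developed in the thesis; the prior parameters and the toy data are assumptions.

# Bayesian estimation of a single step-change point in a Poisson count series,
# with segment rates integrated out under conjugate Gamma(a, b) priors.
import numpy as np
from scipy.special import gammaln

def log_marginal_poisson(y, a=1.0, b=1.0):
    """log p(y) for y ~ Poisson(lam), lam ~ Gamma(a, b) integrated out."""
    y = np.asarray(y)
    S, m = y.sum(), len(y)
    return (a * np.log(b) - gammaln(a) + gammaln(a + S)
            - (a + S) * np.log(b + m) - gammaln(y + 1).sum())

def changepoint_posterior(y, a=1.0, b=1.0):
    """Posterior over the last index of the first segment (uniform prior)."""
    n = len(y)
    logp = np.array([log_marginal_poisson(y[:tau + 1], a, b)
                     + log_marginal_poisson(y[tau + 1:], a, b)
                     for tau in range(n - 1)])
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

# Toy data: adverse-event counts whose rate shifts after observation 30.
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(2.0, 30), rng.poisson(5.0, 20)])
post = changepoint_posterior(y)
print(int(post.argmax()))  # posterior mode of the change-point index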
Abstract:
This presentation presents a blended learning model that provides greater opportunity for learning to be self-managed and personalized.
Abstract:
The 2010 LAGI competition was held on three underutilized sites in the United Arab Emirates. By choosing Staten Island, New York in 2012, the competition organisers have again brought into question new roles for public open space in the contemporary city. In the case of the UAE sites, the competition produced many entries which aimed to create a sculpture and, by doing so, attract people to the selected empty spaces in an arid climate. In a way, these proposals were the incubators and the new characters of these empty spaces. The competition was thus successful at advancing understandings of the expanded role of public open spaces in the UAE and elsewhere. LAGI 2012 differs significantly from the UAE program because Fresh Kills Park has already been planned as a public open space for New Yorkers, with or without these clean energy sculptures. Furthermore, Fresh Kills Park is already a (gas) energy generating site in its own right. We believe Fresh Kills Park, as a site, presents a problem which somewhat transcends the aims of the competition brief. Advancing a sustainable urban design proposition for the site therefore requires a fundamental reconsideration of the established paradigms of public open space. Hence our strategy is not only to create an energy-generating, site-specific artwork, but to create synergy between the public and site engagement while at the same time complementing the idiosyncrasies of the pre-existing engineered landscape. Current PhD research about energy generation in public open spaces informs this work.