870 results for "objective"
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We thus propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model. We chose a CGE model because it has the dual advantage of being consistent with standard economic theory while allowing one to measure the effects of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that policy in 1995 Spain displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independent of the importance given to both growth and inflation by policy makers (we label this set general policy recommendations). The second depends on which policy objective policy makers see as more important, increasing growth or controlling inflation (we label this set objective-specific recommendations).
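As a purely illustrative aside (not the paper's CGE model), the sketch below traces an efficiency frontier between a growth objective to be maximized and an inflation objective to be minimized by sweeping the weight in a weighted-sum scalarization; the two response functions and the single policy lever are hypothetical stand-ins.

```python
# Sketch only: trace a growth/inflation efficiency frontier by weighted-sum
# scalarization. The quadratic responses stand in for the CGE model and x
# is one abstract fiscal-policy lever.
import numpy as np
from scipy.optimize import minimize

def growth(x):     # hypothetical response of real growth to the policy lever
    return 3.0 - (x[0] - 1.0) ** 2

def inflation(x):  # hypothetical response of inflation to the policy lever
    return 2.0 + 1.5 * x[0] ** 2

frontier = []
for w in np.linspace(0.05, 0.95, 10):
    # maximize w*growth - (1-w)*inflation, i.e. minimize its negative
    res = minimize(lambda x: -(w * growth(x) - (1.0 - w) * inflation(x)), x0=[0.5])
    frontier.append((growth(res.x), inflation(res.x)))

for g, p in sorted(frontier):
    print(f"growth={g:.2f}  inflation={p:.2f}")
```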
Abstract:
Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for object recognition, generating high-quality, category-independent object proposals as the input for an object recognition system has drawn attention recently. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance that can be used as inputs for many computer vision tasks. We present an efficient dictionary-based model for the image classification task. We further extend the work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale greedy object proposal generation approach is presented. Based on the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify the representative and diverse exemplar clusters within each scale. Object proposals are obtained by selecting a subset from the multi-scale segment pool via maximizing a submodular objective function, which consists of a weighted coverage term, a single-scale diversity term and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and selected from the multiple layers generated by the hierarchical image segmentation. Experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals in simultaneous segmentation and detection and outperform the state of the art. To classify the object in the image, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation of images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited to classification tasks, even with a simple linear multi-class classifier.
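A minimal sketch of the greedy selection step described above, under simplified assumptions: each segment is a boolean mask, and the score sums a coverage term, a single-scale diversity term and a multi-scale reward term. The term definitions and weights are illustrative, not the paper's exact formulation.

```python
# Sketch of greedy proposal selection under a submodular-style objective.
import numpy as np

def coverage(selected, masks):
    # coverage term (sketch): pixels covered by the selected segments
    covered = np.zeros_like(masks[0], dtype=bool)
    for i in selected:
        covered |= masks[i]
    return float(covered.sum())

def diversity(selected, cluster_of):
    # single-scale diversity term (sketch): distinct exemplar clusters represented
    return float(len({cluster_of[i] for i in selected}))

def reward(selected, scale_of):
    # multi-scale reward term (sketch): distinct hierarchy levels represented
    return float(len({scale_of[i] for i in selected}))

def greedy_proposals(masks, cluster_of, scale_of, k, w=(1.0, 50.0, 50.0)):
    """Greedily pick k segments maximizing the combined score."""
    def score(s):
        return (w[0] * coverage(s, masks) + w[1] * diversity(s, cluster_of)
                + w[2] * reward(s, scale_of))
    selected = []
    for _ in range(k):
        candidates = [i for i in range(len(masks)) if i not in selected]
        selected.append(max(candidates, key=lambda i: score(selected + [i])))
    return selected

masks = [np.zeros((8, 8), dtype=bool) for _ in range(6)]
for i, m in enumerate(masks):
    m[i:i + 3, :] = True  # toy segment masks
print(greedy_proposals(masks, cluster_of=[0, 0, 1, 1, 2, 2],
                       scale_of=[0, 1, 0, 1, 0, 1], k=3))
```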
Abstract:
Objective Structured Clinical Examinations (OSCE) improved the communication skills of Pharmacology students in the Medicine and Podiatry degrees. Bellido I, Blanco E, Gomez-Luque A. Dept. of Pharmacology and Clinical Therapeutics, School of Medicine, University of Malaga, IBIMA, Malaga, Spain. Objective Structured Clinical Examinations (OSCEs) are versatile, multipurpose evaluative tools that can be used to assess health care professionals in a clinical setting, including communication skills and the ability to handle unpredictable patient behavior, which are usually not covered by the traditional clinical exam. Having students design and perform OSCEs is a novelty that students genuinely enjoy and that may improve their reasoning and planning capacities and their communication skills. Aim: To evaluate the impact of students designing, developing and presenting Objective Structured Clinical Examinations (OSCE) on the development of communication skills and on the learning of medicines in Medicine and Podiatry undergraduate students. Methods: A one-year study in which students were invited to voluntarily form groups (4 students maximum). Each group had to design and perform an OSCE (10 min maximum) showing a clinical situation or problem in which the use of medicines was needed. They could use a clinical history, a camera, a mobile phone's video editor, photos, actors, dolls, simulators or any other resources. Each group's work was supervised and supported by a teacher. The students were invited to present their work to the rest of the class. After each OSCE performance, the students were encouraged to ask questions if they wished. After all the OSCE performances, the students voluntarily answered a satisfaction survey. Results: Pharmacology students of the Medicine and Podiatry degrees (N=80, 53.75% female, 21±2.3 years old) were enrolled. Twenty-six OSCEs showing a clinical situation or clinical problem were produced. The average time students spent preparing an OSCE was 21.5±9 h. The percentage of students who were satisfied with this form of OSCE presentation was 89.7%. Conclusion: Objective Structured Clinical Examinations (OSCE) designed and performed by Pharmacology students of the Medicine and Podiatry degrees improved their communication skills.
Abstract:
Ligand-protein docking is an optimization problem based on predicting the position of a ligand with the lowest binding energy in the active site of the receptor. Molecular docking problems are traditionally tackled with single-objective as well as multi-objective approaches that minimize the binding energy. In this paper, we propose a novel multi-objective formulation that considers the Root Mean Square Deviation (RMSD) of the ligand coordinates and the binding (intermolecular) energy as two objectives for evaluating the quality of ligand-protein interactions. To determine the kind of Pareto front approximations that can be obtained, we have selected a set of representative multi-objective algorithms: NSGA-II, SMPSO, GDE3, and MOEA/D. Their performance has been assessed by applying two main quality indicators intended to measure the convergence and diversity of the fronts. In addition, a comparison with LGA, a reference single-objective evolutionary algorithm for molecular docking (AutoDock), is carried out. In general, SMPSO shows the best overall results in terms of energy and RMSD (values lower than 2 Å for successful docking results). This new multi-objective approach shows an improvement over the ligand-protein docking predictions, which could be promising for in silico docking studies to select new anticancer compounds for therapeutic targets that are multidrug resistant.
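As a hedged sketch of the bi-objective formulation, the snippet below scores each candidate pose by ligand RMSD and intermolecular energy (both minimized) and keeps the non-dominated poses; the numeric values are hypothetical placeholders for what a docking scorer such as AutoDock would return.

```python
import numpy as np

def ligand_rmsd(coords, reference):
    """RMSD between two (n_atoms, 3) ligand coordinate arrays."""
    diff = np.asarray(coords) - np.asarray(reference)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def pareto_front(objectives):
    """Indices of non-dominated (rmsd, energy) pairs; lower is better in both."""
    front = []
    for i, (r_i, e_i) in enumerate(objectives):
        dominated = any(r_j <= r_i and e_j <= e_i and (r_j < r_i or e_j < e_i)
                        for j, (r_j, e_j) in enumerate(objectives) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical (RMSD in Angstrom, intermolecular energy in kcal/mol) per pose:
poses = [(1.2, -8.5), (0.9, -7.0), (2.8, -9.1), (1.5, -8.4), (3.0, -6.2)]
print(pareto_front(poses))  # the non-dominated poses approximate the Pareto front
```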
Abstract:
Objective: This study aims to determine whether a collection of 16 motor tests on a physical simulator can objectively discriminate and evaluate practitioners' competency level, i.e. novice, resident, and expert. Methods: An experimental design with three study groups (novice, resident, and expert) was developed to test the evaluative power of each of the 16 simple tests. An ANOVA and a Student-Newman-Keuls (SNK) test were used to analyze the results of each test and determine which of them can discriminate participants' competency level. Results: Four of the 16 tests discriminated all three competency levels, and 15 discriminated at least two of the three groups (α = 0.05). Moreover, two other tests differentiated the beginner level from the intermediate level, and seven other tests differentiated the intermediate level from the expert level. Conclusion: The competency level of a practitioner of minimally invasive surgery can be evaluated with a specific collection of basic tests on a physical surgical simulator. Reducing the number of tests needed to discriminate surgeons' competency level could be the aim of future research.
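A minimal sketch of the per-test analysis on simulated scores: a one-way ANOVA across the three competency groups, followed by a post-hoc comparison. SciPy does not ship a Student-Newman-Keuls test, so Tukey's HSD from statsmodels is shown here as a stand-in post-hoc procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
novice   = rng.normal(40, 8, 20)   # simulated scores for one of the 16 tests
resident = rng.normal(55, 8, 20)
expert   = rng.normal(70, 8, 20)

f_stat, p_value = f_oneway(novice, resident, expert)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4g}")

if p_value < 0.05:  # post-hoc pairwise comparison of the three groups
    scores = np.concatenate([novice, resident, expert])
    groups = ["novice"] * 20 + ["resident"] * 20 + ["expert"] * 20
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```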
Abstract:
Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototype architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
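Purely as an illustration of the dynamic bi-objective TSP being optimized (total distance and total travel time, with the time matrix refreshed from streamed traffic data), the plain-Python sketch below evaluates one tour; the actual tool wires jMetal problem classes to a Spark Streaming source, which is not reproduced here.

```python
import numpy as np

def evaluate_tour(tour, dist, time):
    """Return (total distance, total travel time) for a closed tour."""
    legs = list(zip(tour, tour[1:] + tour[:1]))
    return (sum(dist[i, j] for i, j in legs),
            sum(time[i, j] for i, j in legs))

n = 5
rng = np.random.default_rng(1)
dist = rng.uniform(1.0, 10.0, (n, n))          # static road distances (km)
time = dist / rng.uniform(10.0, 60.0, (n, n))  # travel times (h); refreshed as traffic data arrives

print(evaluate_tour([0, 2, 1, 4, 3], dist, time))
```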
Abstract:
The use of multi-material structures in industry, especially in the automotive industry, is increasing. To overcome the difficulties in joining these structures, adhesives offer several benefits over traditional joining methods. Therefore, accurate simulation of the entire fracture process, including the adhesive layer, is crucial. In this paper, the material parameters of a previously developed meso-mechanical finite element (FE) model of a thin adhesive layer are optimized using the Strength Pareto Evolutionary Algorithm (SPEA2). The objective functions are defined as the error between experimental data and simulation data. The experimental data come from previously performed experiments in which an adhesive layer was loaded in monotonically increasing peel and shear. The two objective functions depend on nine model parameters (decision variables) in total and are evaluated by running two FE simulations, one loading the adhesive layer in peel and the other in shear. The original study converted the two objective functions into a single function, which resulted in one optimal solution. In this study, however, a Pareto front is obtained by employing the SPEA2 algorithm. Thus, more insight into the material model, objective functions, optimal solutions and decision space is acquired using the Pareto front. We compare the results and show good agreement with the experimental data.
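A hedged sketch of how the two objective functions could be assembled: each is the error between an experimental curve and the corresponding FE response, one for peel and one for shear. The stub "simulations" below are toy curves standing in for the actual FE model of the adhesive layer, and the parameter names and counts are hypothetical.

```python
import numpy as np

X = np.linspace(0.0, 1.0, 50)  # deformation grid shared by all curves

def fe_peel(params):
    # stub for the FE peel simulation: a toy traction-separation curve
    stiffness, strength = params[:2]
    return strength * np.tanh(stiffness * X)

def fe_shear(params):
    # stub for the FE shear simulation
    stiffness, strength = params[2:4]
    return strength * (1.0 - np.exp(-stiffness * X))

def objectives(params, exp_peel, exp_shear):
    """Return the two errors (peel, shear) a multi-objective optimizer would minimize."""
    err_peel = float(np.sqrt(np.mean((fe_peel(params) - exp_peel) ** 2)))
    err_shear = float(np.sqrt(np.mean((fe_shear(params) - exp_shear) ** 2)))
    return err_peel, err_shear

# Hypothetical "experimental" curves and one candidate parameter vector:
exp_peel, exp_shear = fe_peel([4.0, 1.0, 0, 0]), fe_shear([0, 0, 6.0, 0.8])
print(objectives([3.5, 0.9, 5.0, 0.85], exp_peel, exp_shear))
```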
Abstract:
OBJECTIVE: To evaluate the scored Patient-Generated Subjective Global Assessment (PG-SGA) tool as an outcome measure in clinical nutrition practice and determine its association with quality of life (QoL). DESIGN: A prospective 4-week study assessing the nutritional status and QoL of ambulatory patients receiving radiation therapy to the head, neck, rectal or abdominal area. SETTING: Australian radiation oncology facilities. SUBJECTS: Sixty cancer patients aged 24-85 y. INTERVENTION: Scored PG-SGA questionnaire, subjective global assessment (SGA), QoL (EORTC QLQ-C30 version 3). RESULTS: According to SGA, 65.0% (39) of subjects were well-nourished, 28.3% (17) moderately or suspected of being malnourished and 6.7% (4) severely malnourished. PG-SGA score and global QoL were correlated (r=-0.66, P<0.001) at baseline. There was a decrease in nutritional status according to PG-SGA score (P<0.001) and SGA (P<0.001), and a decrease in global QoL (P<0.001), after 4 weeks of radiotherapy. There was a linear trend for change in PG-SGA score (P<0.001) and change in global QoL (P=0.003) among patients who improved (5%), maintained (56.7%) or deteriorated (33.3%) in nutritional status according to SGA. There was a correlation between change in PG-SGA score and change in QoL after 4 weeks of radiotherapy (r=-0.55, P<0.001). Regression analysis determined that 26% of the variation of change in QoL was explained by change in PG-SGA (P=0.001). CONCLUSION: The scored PG-SGA is a nutrition assessment tool that identifies malnutrition in ambulatory oncology patients receiving radiotherapy and can be used to predict the magnitude of change in QoL.
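For illustration only, the sketch below reproduces the shape of this analysis (Pearson correlation between change in PG-SGA score and change in global QoL, plus a simple linear regression) on simulated values, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(0)
delta_pgsga = rng.normal(4.0, 3.0, 60)                      # simulated change in PG-SGA score
delta_qol = -3.0 * delta_pgsga + rng.normal(0.0, 12.0, 60)  # simulated change in global QoL

r, p = pearsonr(delta_pgsga, delta_qol)
print(f"r = {r:.2f}, p = {p:.4g}")

fit = linregress(delta_pgsga, delta_qol)
print(f"R^2 = {fit.rvalue ** 2:.2f}  (share of QoL change explained by PG-SGA change)")
```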
Abstract:
OBJECTIVE: To compare, in patients with cancer and in healthy subjects, measured resting energy expenditure (REE) from traditional indirect calorimetry to a new portable device (MedGem) and predicted REE. DESIGN: Cross-sectional clinical validation study. SETTING: Private radiation oncology centre, Brisbane, Australia. SUBJECTS: Cancer patients (n = 18) and healthy subjects (n = 17) aged 37-86 y, with body mass indices ranging from 18 to 42 kg/m(2). INTERVENTIONS: Oxygen consumption (VO(2)) and REE were measured by VMax229 (VM) and MedGem (MG) indirect calorimeters in random order after a 12-h fast and 30-min rest. REE was also calculated from the MG without adjustment for nitrogen excretion (MGN) and estimated from Harris-Benedict prediction equations. Data were analysed using the Bland and Altman approach, based on a clinically acceptable difference between methods of 5%. RESULTS: The mean bias (MGN-VM) was 10% and limits of agreement were -42 to 21% for cancer patients; mean bias -5% with limits of -45 to 35% for healthy subjects. Less than half of the cancer patients (n = 7, 46.7%) and only a third (n = 5, 33.3%) of healthy subjects had measured REE by MGN within clinically acceptable limits of VM. Predicted REE showed a mean bias (HB-VM) of -5% for cancer patients and 4% for healthy subjects, with limits of agreement of -30 to 20% and -27 to 34%, respectively. CONCLUSIONS: Limits of agreement for the MG and Harris-Benedict equations compared to traditional indirect calorimetry were similar but wide, indicating poor clinical accuracy for determining the REE of individual cancer patients and healthy subjects.
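A minimal sketch of the Bland-Altman computation underlying this comparison: the bias is the mean difference between the two methods and the limits of agreement are bias ± 1.96 SD of the differences (the study expresses these as percentages). The REE values below are simulated, not the study's measurements.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return (mean bias, lower limit, upper limit) of agreement: bias +/- 1.96 SD."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(0)
ree_vm = rng.normal(1500.0, 200.0, 18)         # simulated REE by VMax229 (kcal/day)
ree_mg = ree_vm * rng.normal(0.95, 0.10, 18)   # simulated REE by MedGem

bias, lower, upper = bland_altman(ree_mg, ree_vm)
print(f"bias = {bias:.0f} kcal/day, limits of agreement = [{lower:.0f}, {upper:.0f}]")
```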
Abstract:
Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have capacities for co-ordination, reflexive self-correction, and innovation superior to those of other animal, physical or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology – all of which are antithetical to its reflexive function, whilst paradoxically being a fundamental part of it. In particular, in capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called ‘knowledge economy’ implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a ‘knowledge economy’ emerges, then, is the critical point at which thought itself becomes a commodified ‘thing’, and language becomes its “objective” means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism – the knowledge economy – and the obfuscating trajectory that accompanies it are, we argue, destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in that the language practices that have emerged in conjunction with digital technologies are increasingly non-reflexive and therefore less capable of self-critical, conscious change.
Abstract:
The next phase envisioned for the World Wide Web is automated ad-hoc interaction between intelligent agents, web services, databases and semantic web enabled applications. Although at present this appears to be a distant objective, there are practical steps that can be taken to advance the vision. We propose an extension to classical conceptual models to allow the definition of application components in terms of public standards and explicit semantics, thus building into web-based applications the foundation for shared understanding and interoperability. The use of external definitions and the need to store outsourced type information internally bring to light the issue of object identity in a global environment, where object instances may be identified by multiple externally controlled identification schemes. We illustrate how traditional conceptual models may be augmented to recognise and deal with multiple identities.
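As a hedged sketch of the identity issue raised here (an object instance identified under several externally controlled identification schemes), the snippet below stores per-scheme identifiers alongside the instance and treats two instances as the same object when any shared scheme agrees; the scheme names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectInstance:
    """An application object carrying identifiers from multiple external schemes."""
    local_id: str
    external_ids: dict[str, str] = field(default_factory=dict)  # scheme -> identifier

    def same_identity(self, other: "ObjectInstance") -> bool:
        # Same real-world object if at least one shared external scheme agrees.
        shared = self.external_ids.keys() & other.external_ids.keys()
        return any(self.external_ids[s] == other.external_ids[s] for s in shared)

a = ObjectInstance("local-17", {"ISBN": "978-0-13-468599-1"})
b = ObjectInstance("local-03", {"ISBN": "978-0-13-468599-1", "DOI": "10.1000/example"})
print(a.same_identity(b))  # True: matched through the shared ISBN scheme
```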
Abstract:
Two distinct maintenance-data-models are studied: a government Enterprise Resource Planning (ERP) maintenance-data-model, and the Software Engineering Industries (SEI) maintenance-data-model. The objective is to: (i) determine whether the SEI maintenance-data-model is sufficient in the context of ERP (by comparing it with an ERP case), (ii) identify whether the ERP maintenance-data-model in this study has adequately captured the essential and common maintenance attributes (by comparing it with the SEI model), and (iii) propose a new ERP maintenance-data-model as necessary. Our findings suggest that: (i) there are variations from the SEI model in an ERP context, and (ii) there is room for improvement in our ERP case's maintenance-data-model. Thus, a new ERP maintenance-data-model capturing the fundamental ERP maintenance attributes is proposed. This model is imperative for: (i) enhancing the reporting and visibility of maintenance activities, (ii) monitoring maintenance problems, resolutions and performance, and (iii) helping maintenance managers to better manage maintenance activities and make well-informed maintenance decisions.
Abstract:
Previous work by Professor John Frazer on Evolutionary Architecture provides a basis for the development of a system evolving architectural envelopes in a generic and abstract manner. Recent research by the authors has focused on the implementation of a virtual environment for the automatic generation and exploration of complex forms and architectural envelopes based on solid modelling techniques and the integration of evolutionary algorithms, enhanced computational and mathematical models. Abstract data types are introduced for the genotypes in a genetic algorithm in order to develop complex models using generative and evolutionary computing techniques. Multi-objective optimisation techniques are employed for defining the fitness function in the evaluation process.
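A hedged sketch of the two ideas mentioned above, genotypes as abstract data types and a multi-objective fitness: a small genotype type is decoded into envelope dimensions and scored against two hypothetical objectives; none of this reproduces the authors' system.

```python
import random
from dataclasses import dataclass

@dataclass
class EnvelopeGenotype:
    """Abstract data type for a genotype encoding a simple architectural envelope."""
    heights: list[float]   # hypothetical genes: bay heights
    depths: list[float]    # hypothetical genes: bay depths

    def mutate(self, rate=0.1):
        for genes in (self.heights, self.depths):
            for i in range(len(genes)):
                if random.random() < rate:
                    genes[i] *= random.uniform(0.9, 1.1)

def objectives(g: EnvelopeGenotype):
    """Hypothetical objectives: enclosed area (maximize) and perimeter (minimize)."""
    area = sum(h * d for h, d in zip(g.heights, g.depths))
    perimeter = sum(2.0 * (h + d) for h, d in zip(g.heights, g.depths))
    return area, perimeter

def fitness(g: EnvelopeGenotype, w=(1.0, 0.2)):
    """One simple multi-objective aggregation: a weighted sum of the objectives."""
    area, perimeter = objectives(g)
    return w[0] * area - w[1] * perimeter

g = EnvelopeGenotype(heights=[3.0, 3.5, 4.0], depths=[5.0, 6.0, 5.5])
g.mutate()
print(objectives(g), fitness(g))
```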
Abstract:
Bone graft is generally considered fundamental in achieving solid fusion in scoliosis correction, and pseudarthrosis following instrumentation may predispose to implant failure. In endoscopic anterior-instrumented scoliosis surgery, autologous rib or iliac crest graft has traditionally been utilised, but both techniques increase operative duration and cause donor site morbidity. Allograft bone and bone-morphogenetic-protein alternatives may improve fusion rates, but this remains controversial. This study's objective was to compare two-year postoperative fusion rates in a series of patients who underwent endoscopic anterior instrumentation for thoracic scoliosis utilising various bone graft types. Significantly better rates of fusion occurred in endoscopic anterior-instrumented scoliosis correction using femoral allograft compared to autologous rib heads and iliac crest graft. This may be partly explained by the difficulty of obtaining sufficient quantities of autologous graft. Lower fusion rates in the autologous graft group appeared to predispose to rod fracture, although the clinical consequence of implant failure is uncertain.
Abstract:
OBJECTIVE The aim of this research project was to obtain an understanding of the barriers to and facilitators of providing palliative care in neonatal nursing. This article reports the first phase of this research: to develop and administer an instrument to measure the attitudes of neonatal nurses to palliative care. METHODS The instrument developed for this research (the Neonatal Palliative Care Attitude Scale) underwent face and content validity testing with an expert panel and was pilot tested to establish temporal stability. It was then administered to a population sample of 1285 neonatal nurses in Australian NICUs, with a response rate of 50% (N = 645). Exploratory factor-analysis techniques were conducted to identify the scales and subscales of the instrument. RESULTS Data-reduction techniques using principal components analysis were applied. Using the criterion of eigenvalues greater than 1, the items in the Neonatal Palliative Care Attitude Scale extracted 6 factors, which accounted for 48.1% of the variance among the items. By further examining the questions within each factor and the Cronbach's α of the items loading on each factor, factors were accepted or rejected. This resulted in the acceptance of 3 factors indicating barriers to and facilitators of palliative care practice. The constructs represented by these factors indicated barriers to and facilitators of palliative care practice relating to (1) the organization in which the nurse practices, (2) the resources available to support a palliative model of care, and (3) technological imperatives and parental demands. CONCLUSIONS The subscales identified by this analysis comprised items that measured both barriers to and facilitators of palliative care practice in neonatal nursing. While exploratory factor-analysis techniques establish preliminary reliability of the instrument, further testing with different samples of neonatal nurses using a confirmatory factor-analysis approach is necessary.
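For illustration, the sketch below applies the two reported criteria to simulated item responses: components are retained where the correlation-matrix eigenvalues exceed 1, and Cronbach's α is computed for a candidate subscale; it does not reproduce the study's analysis.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(645, 2))                      # two simulated latent attitude factors
loadings = rng.uniform(0.4, 0.9, size=(2, 12))
responses = latent @ loadings + rng.normal(scale=0.6, size=(645, 12))

eigenvalues = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print("eigenvalues > 1:", int((eigenvalues > 1).sum()))     # retention criterion
print("alpha of items 1-6:", round(cronbach_alpha(responses[:, :6]), 2))
```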