Abstract:
Tumor Endothelial Marker-1 (TEM1/CD248) is a tumor vascular marker with high therapeutic and diagnostic potential. Immuno-imaging with TEM1-specific antibodies can help detect cancerous lesions, monitor tumor responses, and select patients who are most likely to benefit from TEM1-targeted therapies. In particular, near-infrared (NIR) optical imaging with biomarker-specific antibodies can provide real-time, tomographic information without exposing the subjects to radioactivity. To maximize the theranostic potential of TEM1, we developed a panel of fully human, multivalent Fc-fusion proteins based on a previously identified single-chain antibody (scFv78) that recognizes both human and mouse TEM1. By characterizing avidity, stability, and pharmacokinetics, we identified one fusion protein, 78Fc, with desirable characteristics for immuno-imaging applications. The biodistribution of radiolabeled 78Fc showed that this antibody had minimal binding to normal organs, which have low expression of TEM1. Next, we developed a 78Fc-based tracer and tested its performance in different TEM1-expressing mouse models. The NIR imaging and tomography results suggest that the 78Fc-NIR tracer performs well in distinguishing mouse- or human-TEM1-expressing tumor grafts from normal organs and control grafts in vivo. From these results we conclude that further development and optimization of 78Fc as a TEM1-targeted imaging agent for use in clinical settings is warranted.
Abstract:
In recent decades, globalized competition among cities and regions has led them to develop new strategies for branding and promoting their territory in order to attract tourists, investors, companies, and residents. Major sports events - such as the Olympic Games, the FIFA World Cup, or World and Continental Championships - have played an integral part in these strategies. Believing, with or without evidence, in the capacity of these events to improve the visibility and the economy of the host destination, many cities, regions, and even countries have established sports-event hosting strategies. The problem with globalized competition in the sports-events "market" is that many cities and regions lack the resources - financial, human, or infrastructural - to compete in hosting major sports events. Consequently, many cities and regions have to turn to second-tier sports events. Organising these smaller events means less media coverage and more difficulty in finding sponsors, while the costs - both financial and in terms of services - remain high for the community. This paper analyses how Heritage Sporting Events (HSEs) might be an opportunity for cities and regions engaged in sports-event hosting strategies. HSE is an emerging concept that has to date been under-researched in the academic literature. This paper therefore aims to define the concept of HSE through an exploratory research study. A multidisciplinary literature review reveals two major characteristics of HSEs: their sustainability in the territory and the authenticity of the event, constructed through a differentiation process. These characteristics, defined through multiple variables, allow us to observe the process by which a sports event is constructed into a heritage object. This paper argues that HSEs can be seen as territorial resources that can represent a competitive advantage for host destinations.
In conclusion, academics are invited to further research HSEs to better understand their construction process and their impacts on the territory, while local authorities are invited to consider HSEs for the branding and the promotion of their territory.
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it is to be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of the initial solution to randomly generate different alternative starting solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating our biased randomization process, together with a high-quality pseudo-random number generator, into it.
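The abstract does not spell out the ILS-ESP operators themselves, but the generic ILS skeleton it builds on (makespan objective, local search, perturbation, acceptance criterion) can be sketched as follows. This is a minimal illustration with assumed, generic choices - an insertion local search, a two-job random-reinsertion perturbation, and a non-worsening acceptance rule - not the paper's algorithm; all function names are illustrative.

```python
import random

def makespan(perm, p):
    """Completion time of the last job on the last machine of a flowshop.
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def local_search(perm, p):
    """First-improvement insertion neighbourhood: repeatedly remove a job
    and reinsert it wherever the makespan strictly decreases."""
    best = makespan(perm, p)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            job = perm[i]
            rest = perm[:i] + perm[i + 1:]
            for pos in range(len(rest) + 1):
                cand = rest[:pos] + [job] + rest[pos:]
                val = makespan(cand, p)
                if val < best:
                    perm, best = cand, val
                    improved = True
                    break
            if improved:
                break
    return perm, best

def ils(p, iters=200, seed=0):
    """Generic Iterated Local Search for the PFSP (illustrative only)."""
    rng = random.Random(seed)
    perm = list(range(len(p)))
    rng.shuffle(perm)
    perm, best = local_search(perm, p)
    best_perm = perm[:]
    for _ in range(iters):
        # Perturbation: remove two random jobs and reinsert them at random slots.
        cand = perm[:]
        for _ in range(2):
            j = cand.pop(rng.randrange(len(cand)))
            cand.insert(rng.randrange(len(cand) + 1), j)
        cand, val = local_search(cand, p)
        # Acceptance: keep the candidate only if it does not worsen the incumbent.
        if val <= makespan(perm, p):
            perm = cand
        if val < best:
            best, best_perm = val, cand[:]
    return best_perm, best
```

On a toy two-machine instance this sketch finds the optimal sequence almost immediately; real PFSP benchmarks would need a faster makespan evaluation and far more iterations than the defaults used here.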
Abstract:
[Traditions. Asia. India. Madras Province [i.e. Chennai]]
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. 
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
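The core machinery the abstract describes - row and column coordinates obtained from the singular-value decomposition of a centred, optionally standardized data matrix - can be sketched in a few lines. Note this is a generic principal-component-style biplot under assumed conventions (principal coordinates for rows, standard coordinates for columns), not the authors' standard-biplot scaling, which additionally weights points by their contributions.

```python
import numpy as np

def biplot_coords(X, standardize=True):
    """Row and column biplot coordinates from the SVD of a centred
    (and optionally standardized) n x p data matrix X."""
    Xc = X - X.mean(axis=0)
    if standardize:
        Xc = Xc / Xc.std(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U * s      # principal coordinates for the cases (rows)
    cols = Vt.T       # standard coordinates for the variables (columns)
    return rows, cols, s
```

The inner-product property mentioned in the abstract holds exactly when all dimensions are kept: `rows @ cols.T` reproduces the centred (and standardized) data, and truncating both `rows` and `cols` to their first two columns gives the planar map.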
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding, in the form of indicator (dummy) variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp-coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy-coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resulting orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed, and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
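As a concrete illustration of fuzzy coding into three categories, the sketch below uses triangular membership functions with assumed hinge points (`low`, `mid`, `high`); the membership functions actually used for the meteorological data in the paper may differ.

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a continuous variable into three categories using
    triangular membership functions; each row of the result sums to 1."""
    x = np.asarray(x, dtype=float)
    m_low = np.clip((mid - x) / (mid - low), 0.0, 1.0)
    m_high = np.clip((x - mid) / (high - mid), 0.0, 1.0)
    m_mid = 1.0 - m_low - m_high
    return np.column_stack([m_low, m_mid, m_high])
```

Because the memberships are piecewise linear and sum to 1, multiplying the coded matrix by the hinge values recovers any observation lying between `low` and `high` exactly, which mirrors the defuzzification step the abstract mentions.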
Abstract:
This study aimed to analyze nipple trauma resulting from breastfeeding from a dermatological perspective. Two integrative literature reviews were conducted: the first on definitions, classifications, and evaluation methods of nipple trauma, and the second on validation studies related to this theme. The first review included 20 studies; only one third of them defined nipple trauma, more than half did not define the nipple injuries they reported, and each author assessed the injuries in a particular way, without consensus. In the second integrative review, no validation study or algorithm related to nipple trauma resulting from breastfeeding was found. This demonstrates that the nipple injuries mentioned in the first review had not undergone validation studies, which explains the lack of consensus identified regarding the definition, classification, and assessment methods of nipple trauma.