77 results for Extremal Problems
Abstract:
This article focuses on student participation in university governance, and specifically presents the main obstacles to such participation and the proposals that can facilitate students' involvement in the running of universities. Drawing on research carried out during the 2007-08 and 2008-09 academic years, in which questionnaires and discussion groups with students were used together with interviews with teaching staff, information was obtained on the main obstacles to student participation. The study shows that, in line with the general trend reflected in other research on this topic in our context, student participation in the various university bodies is low. However, the methodology followed in this research makes it possible to contrast students' opinions with the perceptions of the teaching staff, and thus to identify significant nuances that point to the main directions to be taken in order to change course on this issue. The changes to be undertaken concern not only improving the mechanisms for informing students about the available channels of participation, but also rethinking participatory processes within the university, as well as the role of teaching staff and, specifically, of the coordinators of the management bodies closest to the students. The conclusions present proposals for improvement aimed at strengthening students' involvement in the running of the university, among them the following: improving information and communication channels with students, improving electoral processes, offering students training for participation, and training teaching staff in the methodologies, resources and instruments that can foster student motivation.
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas on the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting subproblems, yielding a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
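As an illustration of the kind of width-bounded blind search described above, the following minimal sketch implements a breadth-first search with novelty pruning: a state is expanded only if it makes true some tuple of at most k atoms not seen before, so the number of expanded states is polynomial in the number of atoms for fixed k. All names and interfaces here are assumptions for illustration, not the authors' implementation.

from collections import deque
from itertools import combinations

def iw_search(initial_state, goal_test, successors, k=1):
    # Blind breadth-first search with novelty-based pruning.
    # States are frozensets of ground atoms; successors(state) yields
    # (action, next_state) pairs.  Illustrative sketch only.
    seen_tuples = set()

    def is_novel(state):
        novel = False
        for size in range(1, k + 1):
            for combo in combinations(sorted(state), size):
                if combo not in seen_tuples:
                    seen_tuples.add(combo)
                    novel = True
        return novel

    is_novel(initial_state)
    frontier = deque([(initial_state, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            if is_novel(next_state):
                frontier.append((next_state, plan + [action]))
    return None  # no plan found within the width bound k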
Abstract:
In this paper we study the relative equilibria and their stability for a system of three point particles moving under the action of a Lennard-Jones potential. A central configuration is a special position of the particles in which the position and acceleration vectors of each particle are proportional, with the same constant of proportionality for all particles. Since the Lennard-Jones potential depends only on the mutual distances among the particles, it is invariant under rotations. In a rotating frame the orbits coming from central configurations become equilibrium points, the relative equilibria. Due to the form of the potential, the relative equilibria depend on the size of the system, that is, they depend strongly on the moment of inertia I. In this work we characterize the relative equilibria, find the bifurcation values of I at which the number of relative equilibria changes, and analyze the stability of the relative equilibria.
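For reference, and in a notation assumed here rather than taken from the paper, the pairwise Lennard-Jones potential and the central-configuration condition read

U(q) = \sum_{1 \le i < j \le 3} 4\varepsilon \left[ \left( \frac{\sigma}{\|q_i - q_j\|} \right)^{12} - \left( \frac{\sigma}{\|q_i - q_j\|} \right)^{6} \right],
\qquad
\nabla_{q_i} U(q) + \lambda\, m_i\, q_i = 0, \quad i = 1, 2, 3,

with the center of mass fixed at the origin and the same multiplier \lambda for every particle; the moment of inertia is I = \sum_i m_i \|q_i\|^2.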
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less well known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
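In one common parameterization (assumed here; the paper may use different symbols), the Feller process solves the square-root stochastic differential equation

dX_t = (\alpha - \beta X_t)\,dt + k\sqrt{X_t}\,dW_t, \qquad X_t \ge 0,

so the drift is linear and the diffusion coefficient k^2 X_t vanishes at the origin; when 2\alpha \ge k^2 the origin is unattainable, which keeps the process strictly positive.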
Abstract:
This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise ratio (SNR) asymptotic performance of the maximum likelihood estimator (derived assuming Gaussian transmitted symbols) is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature-amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant modulus constellation such as phase-shift keying (PSK) or continuous-phase modulations (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally-modulated signals.
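In a notation assumed here for illustration, the Gaussian (unconditional) maximum-likelihood criterion referred to above reduces, for N zero-mean circularly symmetric snapshots with model covariance R(\theta) and sample covariance \hat{R} = \frac{1}{N}\sum_k y_k y_k^H, to

\hat{\theta} = \arg\min_{\theta}\; \Big\{ \ln\det R(\theta) + \operatorname{tr}\!\big( R(\theta)^{-1} \hat{R} \big) \Big\},

which depends on the data only through its second-order statistics.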
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q=1/2 case. We show that, when the residual principle is taken as a constraint, the q=1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution so obtained is endowed with a component that corresponds to the well-known regularized solution of Tikhonov (1977).
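For orientation (symbols assumed here, not taken from the paper), the objects involved are the nonextensive Tsallis entropy and the Tikhonov functional,

S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad S_{1/2}[p] = 2\Big( \sum_i \sqrt{p_i} - 1 \Big),

x_\lambda = \arg\min_x \; \|Ax - b\|^2 + \lambda \|x\|^2,

with the regularization parameter fixed by the residual (discrepancy) principle \|A x_\lambda - b\| = \delta, where \delta is the noise level.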
Abstract:
Sudoku problems are among the best-known and most widely enjoyed pastimes, with a popularity that never diminishes; over the last few years, however, these problems have gone from being an entertainment to an interesting research area, and indeed one that is interesting in two ways. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and the existence of a solution is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
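A minimal sketch of the SAT-style modeling mentioned above, for a Generalized Sudoku with blocks of m rows and n columns (variable names and conventions are assumptions for illustration, not necessarily the encoding used in the paper):

def gsp_cnf(m, n, givens):
    # Encode a Generalized Sudoku of order N = m*n (blocks of m rows x n columns)
    # as CNF clauses over Boolean variables x[r][c][v] = "cell (r,c) holds value v".
    # `givens` maps (r, c) -> v for pre-filled cells.  Illustrative sketch only.
    N = m * n
    var = lambda r, c, v: r * N * N + c * N + v + 1  # 1-based DIMACS variable id
    clauses = []
    for r in range(N):
        for c in range(N):
            clauses.append([var(r, c, v) for v in range(N)])        # at least one value
            for v in range(N):
                for w in range(v + 1, N):
                    clauses.append([-var(r, c, v), -var(r, c, w)])  # at most one value
    def all_different(cells):
        for v in range(N):
            for i in range(len(cells)):
                for j in range(i + 1, len(cells)):
                    (r1, c1), (r2, c2) = cells[i], cells[j]
                    clauses.append([-var(r1, c1, v), -var(r2, c2, v)])
    for r in range(N):
        all_different([(r, c) for c in range(N)])                   # rows
    for c in range(N):
        all_different([(r, c) for r in range(N)])                   # columns
    for br in range(0, N, m):
        for bc in range(0, N, n):
            all_different([(br + i, bc + j) for i in range(m) for j in range(n)])  # blocks
    for (r, c), v in givens.items():
        clauses.append([var(r, c, v)])                              # pre-filled cells
    return clauses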
Abstract:
A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, and the generalization is then extended to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, obtained under assumptions of continuity and piecewise differentiability of the endogenous variables of the problem, remain valid after replacing those assumptions with two minimal requirements.
Abstract:
N = 1 designs imply repeated registrations of the behaviour of the same experimental unit, and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests are one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study, a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the available evidence, there does not seem to be an optimal technique for single-case data analysis.
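A minimal sketch (assumed conventions, not the exact procedure evaluated in the study) of a randomization test for a single-case AB design in which the intervention point is the randomized element:

import numpy as np

def ab_randomization_test(y, actual_start, min_phase=5):
    # y            : sequence of repeated measurements on one experimental unit
    # actual_start : index at which phase B (intervention) actually began
    # min_phase    : minimum number of observations required in each phase
    # The test statistic is the difference between phase-B and phase-A means;
    # its reference distribution is built from all admissible intervention points.
    y = np.asarray(y, dtype=float)
    def stat(start):
        return y[start:].mean() - y[:start].mean()
    observed = stat(actual_start)
    candidates = range(min_phase, len(y) - min_phase + 1)
    reference = np.array([stat(s) for s in candidates])
    # two-sided p-value: proportion of admissible statistics at least as extreme
    p_value = np.mean(np.abs(reference) >= abs(observed))
    return observed, p_value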
Abstract:
Background: A holistic perspective on health implies giving careful consideration to the relationship between physical and mental health. In this regard, the present study sought to determine the level of Positive Mental Health (PMH) among people with chronic physical health problems, and to examine the relationship between the observed levels of PMH and both physical health status and socio-demographic variables. Methods: The study was based on the Multifactor Model of Positive Mental Health (Lluch, 1999), which comprises six factors: Personal Satisfaction (F1), Prosocial Attitude (F2), Self-control (F3), Autonomy (F4), Problem-solving and Self-actualization (F5), and Interpersonal Relationship Skills (F6). The sample comprised 259 adults with chronic physical health problems who were recruited through a primary care center in the province of Barcelona (Spain). Positive mental health was assessed by means of the Positive Mental Health Questionnaire (Lluch, 1999). Results: Levels of PMH differed, either on the global scale or on specific factors, in relation to the following variables: a) age: global PMH scores decreased with age (r=-0.129; p=0.038); b) gender: men scored higher on F1 (t=2.203; p=0.028) and F4 (t=3.182; p=0.002), while women scored higher on F2 (t=-3.086; p=0.002) and F6 (t=-2.744; p=0.007); c) number of health conditions: the fewer the number of health problems, the higher the PMH score on F5 (r=-0.146; p=0.019); d) daily medication: polymedicated patients had lower PMH scores, both globally and on various factors; e) use of analgesics: occasional use of painkillers was associated with higher PMH scores on F1 (t=-2.811; p=0.006). There were no significant differences in global PMH scores according to the type of chronic health condition. The only significant difference in the analysis by factors was that patients with hypertension obtained lower PMH scores on the Autonomy factor (t=2.165; p=0.032). Conclusions: Most people with chronic physical health problems have medium or high levels of PMH. The variables that adversely affect PMH are old age, polypharmacy and frequent consumption of analgesics. The type of health problem does not influence the level of PMH. Much more extensive studies, with samples without chronic pathology, are now required in order to be able to draw more robust conclusions.
Abstract:
Little is known about how genetic and environmental factors contribute to the association between parental negativity and behavior problems from early childhood to adolescence. The current study fitted a cross-lagged model in a sample of 4,075 twin pairs to explore (a) the role of genetic and environmental factors in the relationship between parental negativity and behavior problems from age 4 to age 12, (b) whether parent-driven and child-driven processes independently explain the association, and (c) whether there are sex differences in this relationship. Both phenotypes showed substantial genetic influence at both ages. The concurrent overlap between them was mainly accounted for by genetic factors. Causal pathways representing stability of the phenotypes and parent-driven and child-driven effects significantly and independently accounted for the association. Significant but slight differences were found between males and females for parent-driven effects. These results were highly similar when general cognitive ability was added as a covariate. In summary, the longitudinal association between parental negativity and behavior problems seems to be bidirectional and mainly accounted for by genetic factors. Furthermore, child-driven effects were mainly genetically mediated, whereas parent-driven effects were a function of both genetic and shared-environmental factors.
Abstract:
Reinsurance is one of the tools that an insurer can use to mitigate its underwriting risk and thereby control its solvency. In this paper we focus on proportional reinsurance arrangements, and we examine several optimization and decision problems faced by the insurer with respect to the reinsurance strategy. To this end, we use as decision tools not only the probability of ruin but also the random variable given by the deficit at ruin, if ruin occurs. The discounted penalty function (Gerber & Shiu, 1998) is employed to obtain, as particular cases, the probability of ruin and the moments and distribution function of the deficit at ruin, if ruin occurs.
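For reference, the discounted penalty function of Gerber & Shiu (1998), in notation assumed here, is

\phi(u) = \mathbb{E}\!\left[ e^{-\delta T}\, w\big(U(T^-),\, |U(T)|\big)\, \mathbf{1}_{\{T < \infty\}} \;\middle|\; U(0) = u \right],

where T is the time of ruin, U(T^-) the surplus immediately before ruin, |U(T)| the deficit at ruin, \delta \ge 0 a discount rate and w a penalty function; taking \delta = 0 and w \equiv 1 recovers the ruin probability, while w(x, y) = y^k yields the moments of the deficit at ruin.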
Abstract:
An extension of the standard rationing model is introduced. Agents are identified not only by their respective claims over some amount of a scarce resource, but also by some payoff thresholds. These thresholds introduce exogenous differences among agents (full or partial priority, past allocations, past debts, ...) that may influence the final distribution. Within this framework we provide generalizations of the constrained equal awards rule and the constrained equal losses rule. We show that these generalized rules are dual to each other. We characterize the generalization of the equal awards rule by means of the properties of consistency, path-independence and compensated exemption. Finally, we use the duality between the rules to characterize the generalization of the equal losses solution.
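The baseline rules being generalized are, in standard notation assumed here, defined for a claims vector c and an endowment E with \sum_i c_i \ge E by

\mathrm{CEA}_i(c, E) = \min\{c_i, \lambda\}, \quad \text{where } \lambda \text{ solves } \sum_j \min\{c_j, \lambda\} = E,

\mathrm{CEL}_i(c, E) = \max\{c_i - \mu, 0\}, \quad \text{where } \mu \text{ solves } \sum_j \max\{c_j - \mu, 0\} = E,

and the duality referred to above takes the familiar form \mathrm{CEL}(c, E) = c - \mathrm{CEA}\big(c, \sum_i c_i - E\big).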
Abstract:
New economic and enterprise needs have increased the interest in, and usefulness of, grouping methods based on the theory of uncertainty. A fuzzy grouping (clustering) process is a key phase of knowledge acquisition and of reducing the complexity involved in handling different groups of objects. Here we consider some elements of the theory of affinities and of uncertain pretopology, which together form a significant support tool for a fuzzy clustering process. A Galois lattice is introduced in order to provide a clearer view of the results. We carried out a homogeneous grouping of the economic regions of the Russian Federation and Ukraine. The results obtained give a broad panorama of the regional economic situation of the two countries, as well as key guidelines for decision-making. The mathematical method is very sensitive to any changes the regional economy may undergo. We thus provide an alternative method for the grouping process under uncertainty.