857 results for Equality Scheme
Abstract:
Mixed convection in the flow past a heated length and a porous cavity located in a horizontal wall bounding a saturated porous medium is numerically simulated. The cavity is heated from below. The steady-state regime is studied for several intensities of the buoyancy effects due to temperature variations. The influences of the Péclet and Rayleigh numbers on the flow pattern and the temperature distributions are examined. Local and global Nusselt numbers are reported for the heated surface. The convective-diffusive fluxes at the volume boundaries are represented using UNIFAES (the Unified Finite Approach Exponential-type Scheme), with the Power-Law approximation used to reduce the computing time. The conditions established by Rivas for the quadratic order of accuracy of central differencing to be maintained on irregular grids are shown to extend to other quadratic schemes, including UNIFAES, so that accuracy estimates could be obtained.
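For reference, the Power-Law approximation invoked above is presumably Patankar's power-law scheme, which replaces the exact exponential convection-diffusion profile with an inexpensive polynomial fit. A minimal sketch of the resulting face coefficient, assuming the standard one-dimensional finite-volume form (F the convective flux through the face, D its diffusion conductance), is:

```python
def power_law_coefficient(F, D):
    """Patankar power-law approximation to the exact exponential scheme.

    F : convective mass flux through the control-volume face
    D : diffusion conductance of the face
    Returns the east-face neighbour coefficient a_E; the west-face
    coefficient follows the same formula with F replaced by -F.
    """
    P = F / D                                   # cell Peclet number
    A = max(0.0, (1.0 - 0.1 * abs(P)) ** 5)     # power-law fit to exp(P)/(exp(P)-1) behaviour
    return D * A + max(-F, 0.0)
```

UNIFAES itself uses a different, exponential-based construction; the snippet only illustrates the cheaper approximation the abstract mentions.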
Abstract:
Naomi Shinomiya Hell was the first researcher to investigate the physiological adaptations to a meal-feeding scheme (MFS) in Brazil. Over a period of 20 years, from 1979 to 1999, Naomi's group determined the physiological and metabolic adaptations induced by this feeding scheme in rats. The group showed that such adaptations persist even when the MFS is associated with moderate exercise training and with the performance of a session of intense physical effort. The metabolic changes induced by the feeding training were discriminated from those caused by the effective fasting period. Naomi made an important contribution to the understanding of the MFS, but much remains to be done. One crucial question still remains to be satisfactorily answered: what is the ideal control for the MFS?
Abstract:
Thesis (Master of Science in Electrical Engineering), UANL, 2014.
Abstract:
Based on a close reading of the debate between Rawls and Sen on primary goods versus capabilities, I argue that liberal theory cannot adequately respond to Sen’s critique within a conventionally neutralist framework. In support of the capability approach, I explain why and how it defends a more robust conception of opportunity and freedom, along with public debate on substantive questions about well-being and the good life. My aims are: (i) to show that Sen’s capability approach is at odds with Rawls’s political liberal version of neutrality; (ii) to carve out a third space in the neutrality debate; and (iii) to begin to develop, from Sen’s approach, the idea of public value liberalism as a position that falls within that third space.
Abstract:
This master's thesis aims to survey the advantages and disadvantages of using the dynamic functional programming language Scheme for video game development. The method used is first based on a more theoretical approach: a study of the programming needs raised by this kind of development, together with a description detailing the features of the Scheme language relevant to video game development, is given in order to put the subject in context. A practical approach is then taken by developing two video games of increasing complexity: Space Invaders and Lode Runner. The development of these games led to the extension of the Scheme language with several domain-specific languages and libraries, notably an object-oriented programming system and a coroutine system. The experience gained through the development of these games is finally compared with that of other video game developers in industry who have used Scheme to create commercial titles. In summary, the use of this language made it possible to reach a high level of abstraction that favours the modularity of the games developed, without affecting their performance.
Abstract:
In order to assess the degree to which the provision of economic incentives can result in justified inequalities, we need to distinguish between compensatory incentive payments and non-compensatory incentive payments. From a liberal egalitarian perspective, economic inequalities traceable to the provision of compensatory incentive payments are generally justifiable. However, economic inequalities created by the provision of non-compensatory incentive payments are more problematic. I argue that in non-ideal circumstances justice may permit and even require the provision of non-compensatory incentives, despite the fact that those who receive non-compensatory payments are not entitled to them. In some circumstances, justice may require us to accede to unreasonable demands for incentive payments by hard bargainers. This leads to a kind of paradox: from a systemic point of view, non-compensatory incentive payments can be justified even though those who receive them have no just claim to them.
Abstract:
In this article I consider a recent challenge to egalitarianism developed by Michael Huemer. Huemer's challenge takes the form of a dilemma: egalitarians can be either atomists or holists about the value of equality. If they are atomists, then they must accept that equality has no intrinsic value; if they are holists, then their view is inconsistent with an intuitive and very plausible form of consequentialism. I show that this dilemma need not trouble egalitarians: they can be holists about value while at the same time adhering to consequentialism.
Abstract:
In order to optimise the in-memory representation of Scheme records in the Gambit compiler, we introduced into it a system of type annotations together with vectors holding an abbreviated representation of records. The latter omit the reference to the type descriptor and the header usually present on every record, and instead use a typing tree covering all of memory to locate the containing vector from a reference. These new features are implemented through changes to the Gambit runtime. We introduce new primitives into the language and modify the existing architecture to handle the new data types correctly. The garbage collector must be modified to account for records containing heterogeneous values with irregular alignments, and for references contained within other objects. Management of the typing tree must also be handled automatically. We then run a series of performance tests to determine whether gains are possible with these new primitives. We observe a major performance improvement in allocation and in GC behaviour for large typed records and for vectors of records, whether typed or not. Slight overheads are nevertheless incurred on field accesses and, in the case of record vectors, on access to the type descriptor.
Abstract:
In recent years, the protection of information in digital form has become increasingly important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, tele-medicine and military communications. During storage as well as transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an "attack on confidentiality". Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in the successive transmission of information in real time. In order to minimize this latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric-key block ciphers like the Data Encryption Standard (DES), Advanced Encryption Standard (AES) and Escrowed Encryption Standard (EES) are not efficient when the data size is large. With fast computing tools and communication networks available at relatively low cost today, these encryption standards appear not as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data. It has been shown that public-key algorithms are not a substitute for symmetric-key algorithms: public-key algorithms are slow, whereas symmetric-key algorithms generally run much faster. Also, public-key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric-key encryption scheme, entitled "Matrix Array Symmetric Key (MASK) encryption" and based on matrix and array manipulations, has been conceived and developed. Fast conversion is achieved through the matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric-key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits, using a secret key of 128 bits and producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler and, consequently, much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
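The abstract does not disclose the MASK round structure, but the three operation types it names (key-dependent table look-up substitution, matrix/array transposition, and circular shifts on a 128-bit block) can be illustrated with a toy round. The sketch below is hypothetical, is not the MASK cipher, and offers no cryptographic security:

```python
import random

BLOCK_BYTES = 16  # 128-bit block, matching the block size quoted in the abstract

def toy_sbox(key: bytes) -> list[int]:
    """Key-dependent substitution table: a permutation of 0..255 (illustrative only)."""
    table = list(range(256))
    random.Random(key).shuffle(table)
    return table

def toy_round(block: bytes, sbox: list[int], shift: int) -> bytes:
    """One illustrative round: substitute, transpose a 4x4 byte matrix, then rotate."""
    assert len(block) == BLOCK_BYTES and len(sbox) == 256
    sub = [sbox[b] for b in block]                                  # table look-up substitution
    trans = [sub[4 * c + r] for r in range(4) for c in range(4)]    # 4x4 matrix transposition
    shift %= BLOCK_BYTES
    return bytes(trans[shift:] + trans[:shift])                     # circular byte shift
```

A real cipher would derive the table and shift amounts from a key schedule and iterate many such rounds; the point here is only the kind of matrix and array manipulation the abstract describes.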
Abstract:
This thesis is a study of "Equality of Opportunity in Public Employment: Judicial Perspectives on Backwardness". The study is an attempt to evaluate the concepts of backwardness and equality of opportunity in employment and to assess the judicial perspectives in relation to them. The study reveals that the recent review petition of the Constitution Bench did not assess the decision in Chakradhar and its import. The study also reveals that the Indian judiciary could successfully locate and apply the above principles. It was Justice Subba Rao's nascent attempt in Devadasan which marked the starting point of such a jurisprudential enquiry. Later, Thomas developed these thoughts by reading new meaning and content into the equality provisions of the Constitution, which included the elimination of inequalities as the positive content of Articles 14 and 16(1), and elevated the reservation provision to the same status as the equality principles under the Constitution. Soshit, Vasanth Kumar and Mandal further supplemented the jurisprudential content. In this process, the courts were guided by the theories of John Rawls, David Miller, Ronald Dworkin, Max Weber and Roscoe Pound. Thus there was a slow and steady process of transformation of the reservation provision: from an anti-meritarian, unenforceable and enabling provision, it came to be regarded as an equally relevant and explanatory part of the fundamental right to equality. Mandal viewed it as a part of the sharing of State power. Though this can be seen by re-reading and joining together the thoughts of judges in this regard, the judicial approach lacks coherence and concerted effort in evolving a jurisprudential basis for protective discrimination. The deliberations of the framers of the Constitution reveal that there was much confusion and indeterminacy with regard to the concept of backwardness. The study shows that the judiciary has kept intact the framers' expectations of having a reasonable quantum of reservation, preventing undeserving sections from enjoying the benefit, avoiding its abuse, and evolving new criteria while rejecting the old ones.
Abstract:
Health insurance has become a necessity for the common man, next to food, clothing and shelter. The financing of health expenses, whether for catastrophic or even frequently contracted illnesses, is a major cause of mental agony for the common man. The cost of care may sometimes result in the complete erosion of family savings or may even lead to indebtedness, as many studies on the causes of rural indebtedness bear testimony (Jayalakshmi, 2006). A suitable cover by way of health insurance is all that is required to cope with such situations. Health care insurance rightly provides the mechanism for both individuals and families to mitigate the financial burden of medical expenses in the present context. Hence a well-designed, affordable health insurance policy is the need of the hour. It is therefore very significant to study the extent to which beneficiaries in Kerala make use of the benefits provided by a social health insurance scheme like RSBY-CHIS. On these grounds, the study assumes national relevance even though its geographical scope is limited to two districts of Kerala. The findings of the study will provide valuable inputs on the services availed by the beneficiaries of RSBY-CHIS and will help in taking appropriate measures to improve the effectiveness of the scheme, so that maximum quality benefit can be availed by the poorest of the poor and the scheme can develop into a real dawn of a new era of health for them.
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. The inclusion of mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster-head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on the following metrics: number of cluster-head transitions, average residual energy, number of alive nodes and number of messages lost.
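The abstract does not define the mobility factor itself; as an illustration only, a common formulation scores cluster-head candidates by weighting residual energy against recent mobility. The fields `residual_energy` and `speed` and the weight `alpha` below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float   # joules remaining (hypothetical field)
    speed: float             # recent average speed in m/s (hypothetical field)

def cluster_head_score(node: Node, e_max: float, v_max: float, alpha: float = 0.5) -> float:
    """Illustrative score favouring high residual energy and low mobility.

    alpha trades energy against mobility; the actual mobility factor used in
    the paper is not specified in the abstract.
    """
    energy_term = node.residual_energy / e_max
    mobility_term = 1.0 - min(node.speed / v_max, 1.0)
    return alpha * energy_term + (1.0 - alpha) * mobility_term

def elect_cluster_heads(nodes: list[Node], k: int, e_max: float, v_max: float) -> list[Node]:
    """Pick the k highest-scoring nodes as cluster heads (a deterministic sketch,
    in contrast to the stochastic election used by LEACH-like schemes)."""
    ranked = sorted(nodes, key=lambda n: cluster_head_score(n, e_max, v_max), reverse=True)
    return ranked[:k]
```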
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
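As a rough illustration of the error-correlation idea, the toy simulation below passes 2-PAM symbols through a short residual-ISI channel, detects them with a threshold, and reports the joint error probability at each lag normalised by the product of marginals (values near 1 indicate independent errors). The channel taps, noise level, and resulting error rate are made-up and far milder than the regimes the paper targets:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_errors(n_sym=200_000, isi=(1.0, 0.25, -0.1), sigma=0.35):
    """Toy 2-PAM link with residual ISI and Gaussian noise; returns the error sequence."""
    bits = rng.integers(0, 2, n_sym)
    symbols = 2.0 * bits - 1.0
    received = np.convolve(symbols, isi)[:n_sym] + sigma * rng.standard_normal(n_sym)
    return (received > 0).astype(int) != bits            # boolean error indicator per symbol

def normalised_joint_error(errors, max_lag=10):
    """P(error at n and at n+d) / P(error)^2 for lags d = 1..max_lag."""
    p = errors.mean()
    return [float((errors[:-d] & errors[d:]).mean() / (p * p)) for d in range(1, max_lag + 1)]

errs = simulate_errors()
print("BER ≈", errs.mean())
print("normalised joint error probability by lag:", normalised_joint_error(errs))
```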
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation that minimizes the FER or BER of a given coded OFDM system and channel response under a constant total transmission power constraint is obtained.
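A minimal numerical sketch of the kind of optimisation described, assuming a simple exponential error-bound surrogate rather than the paper's code-specific FER/BER bounds (the channel gains `h`, `noise` and `p_total` below are made-up values):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative convex surrogate for coded-OFDM power allocation: minimise a sum of
# exponential error-bound terms over subcarrier powers under a total-power constraint.
h = np.array([1.0, 0.3, 0.8, 0.1, 0.6])   # hypothetical subcarrier channel magnitudes
noise = 0.1
p_total = 5.0

def error_bound(p):
    snr = p * h**2 / noise
    return np.sum(np.exp(-snr))           # convex and decreasing in each subcarrier power

n = len(h)
result = minimize(
    error_bound,
    x0=np.full(n, p_total / n),           # start from the conventional uniform allocation
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - p_total}],
    bounds=[(0.0, None)] * n,
    method="SLSQP",
)
print("optimised per-subcarrier power:", np.round(result.x, 3))
print("bound, uniform vs optimised:",
      error_bound(np.full(n, p_total / n)), error_bound(result.x))
```

Because the surrogate objective is convex and the constraint set is affine, any local solution found this way is globally optimal, which is what makes this style of power optimisation efficient.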