892 results for Digital information environment
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
We present a new catalogue of galaxy triplets derived from the Sloan Digital Sky Survey (SDSS) Data Release 7. The identification of systems was performed considering galaxies brighter than Mr = -20.5 and imposing constraints on the projected distances, radial velocity differences of neighbouring galaxies and isolation. To improve the identification of triplets, we employed a data pixelization scheme, which allows us to handle large amounts of data as in the SDSS photometric survey. Using spectroscopic and photometric data in the redshift range 0.01 ≤ z ≤ 0.40, we obtain 5901 triplet candidates. We have used a mock catalogue to analyse the completeness and contamination of our methods. The results show a high level of completeness (~80 per cent) and low contamination (~5 per cent). By using photometric and spectroscopic data, we have also addressed the effects of fibre collisions in the spectroscopic sample. We have defined an isolation criterion considering the distance of the triplet's brightest galaxy to the closest neighbouring cluster, to describe the global environment, as well as the galaxies within a fixed aperture around the triplet's brightest galaxy, to measure the local environment. The final catalogue comprises 1092 isolated triplets of galaxies in the redshift range 0.01 ≤ z ≤ 0.40. Our results show that photometric redshifts provide very useful information, allowing us to complete the sample of nearby systems whose detection is affected by fibre collisions, as well as extending the detection of triplets to larger distances, where spectroscopic redshifts are not available.
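As a rough illustration of the selection described above, the sketch below checks one galaxy triple against the cuts. Only the magnitude limit (Mr ≤ -20.5) and the redshift range come from the abstract; the separation and velocity thresholds, and every name in the code, are hypothetical placeholders rather than the catalogue's actual values.

```python
# Toy check of one galaxy triple against the selection cuts. Only the
# magnitude cut (Mr <= -20.5) and the redshift range (0.01 <= z <= 0.40)
# come from the abstract; the separation and velocity limits are
# invented placeholders.
MAG_LIMIT = -20.5          # absolute r-band magnitude cut (from abstract)
Z_MIN, Z_MAX = 0.01, 0.40  # redshift range (from abstract)
D_PROJ_MAX = 200.0         # projected separation limit in kpc (assumed)
DV_MAX = 700.0             # radial velocity difference limit in km/s (assumed)

def is_triplet_candidate(mags, d_proj, dv, z):
    """Return True if a galaxy triple passes all selection cuts."""
    bright = all(m <= MAG_LIMIT for m in mags)      # luminosity cut
    compact = all(d <= D_PROJ_MAX for d in d_proj)  # pairwise separations
    comoving = all(abs(v) <= DV_MAX for v in dv)    # velocity differences
    in_range = Z_MIN <= z <= Z_MAX                  # redshift window
    return bright and compact and comoving and in_range

# Example: a compact, luminous triple at z = 0.05 passes the cuts.
print(is_triplet_candidate(mags=[-21.0, -20.8, -20.6],
                           d_proj=[120.0, 90.0, 150.0],
                           dv=[250.0, 400.0, 180.0],
                           z=0.05))  # -> True
```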
Abstract:
This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type, we first addressed the problem of capacity in the presence of several nuisances. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems, we proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional union bound at no additional complexity. Finally, we designed a low-complexity SNR estimator which saves half of the multiplications with respect to the ML estimator while achieving similar estimation accuracy.
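The abstract quantifies one contribution by the PAPR (peak-to-average power ratio) of the OFDM signal; as background, a minimal sketch of how that metric is computed for one OFDM symbol follows. The subcarrier count, QPSK mapping and normalization are illustrative assumptions, not the thesis's actual mapping scheme.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# One OFDM symbol: N QPSK-mapped subcarriers pushed through an IFFT.
rng = np.random.default_rng(0)
N = 256  # number of subcarriers (illustrative choice)
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)  # scaled to unit average power

# Plain OFDM typically lands near 10 dB here; a low-PAPR mapping such as
# the one the thesis proposes is designed to push this figure much lower.
print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")
```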
Abstract:
This study will look at passenger air bag (PAB) performance in a fixed vehicle environment using Partial Low Risk Deployment (PLRD) as a strategy. This development will follow test methods against actual baseline vehicle data and Federal Motor Vehicle Safety Standard 208 (FMVSS 208). FMVSS 208 states that PAB compliance in vehicle crash testing can be met using one of three deployment methods. The primary method suppresses PAB deployment, with the use of a seat weight sensor or occupant classification sensor (OCS), for three-year-old and six-year-old occupants, including in the presence of a child seat. A second method, PLRD, allows deployment for occupants of all sizes, suppressing only in the presence of a child seat. A third method, Low Risk Deployment (LRD), allows PAB deployment in all conditions and all statures, including any and all child seats. This study outlines a PLRD development solution for achieving FMVSS 208 performance. The results of this study should provide an option for system implementation, including opportunities for system efficiency and other considerations. The objective is to achieve performance levels similar to, or incrementally better than, the baseline vehicle's New Car Assessment Program (NCAP) star rating, and in addition to define systemic flexibility whereby restraint features can be added or removed while keeping occupant performance consistent with the baseline. A certified vehicle's air bag system will typically remain in production until the vehicle platform is redesigned. The strategy to test the PLRD hypothesis was, first, to match the baseline out-of-position occupant performance (OOP) for the three- and six-year-old requirements; second, to improve the 35 mph belted 5th-percentile-female NCAP star rating over the baseline vehicle; and third, to establish an equivalent FMVSS 208 certification for the 25 mph unbelted 50th-percentile male. The FMVSS 208 high-speed requirement defines the federal minimum crash performance required for meeting frontal vehicle crash-test compliance. The intent of the NCAP 5-star rating is to provide the consumer with information about crash protection beyond what is required by federal law. In this study, two vehicle segments were used for testing, to compare and contrast with their baseline vehicles' performance. Case Study 1 (CS1) used a crossover vehicle platform and Case Study 2 (CS2) used a small-segment vehicle platform as their baselines. In each case study, the restraint systems came from a different restraint supplier, and each case reflected that supplier's approach to PLRD. CS1 incorporated a downsized twin-shaped bag, a carryover inflator, standard vents, and a strategically positioned bag diffuser to help disperse the flow of gas and improve OOP performance. The twin-shaped bag, with two segregated sections (lobes), enabled high-speed baseline performance correlation on the HYGE sled. CS2 used an asymmetric (square-shaped) PAB with standard-size vents, including a passive vent, to obtain OOP performance similar to the baseline. The asymmetric bag shape also helped enable high-speed baseline performance improvements in HYGE sled testing in CS2. The anticipated CS1 baseline vehicle-pulse-index (VPI) target was in the range of 65-67. However, actual dynamic vehicle (barrier) testing produced the highest crash pulse of the previously tested vehicles, with a VPI of 71. The result from the 35 mph NCAP barrier test was nevertheless a solid 4-star rating (4.7 stars).
In CS2, the vehicle HYGE sled development VPI range from the baseline was 61-62. The actual NCAP test produced a chest deflection result of 26 mm versus the anticipated baseline target of 12 mm. This condition was initially attributed to the vehicle's significant VPI increase to 67, but a subsequent root-cause investigation confirmed a data integrity issue due to the instrumentation. In an effort to establish a true vehicle test data point, a second NCAP test was performed, but it faced similar instrumentation issues. In that test the chest deflection hit the target at 12.1 mm; however, a femur load spike similar to the baseline's now skewed the results. Given the noted improvement in chest deflection, the NCAP performance was assessed as directionally capable of 5 stars: the actual rating was 3 stars due to the instrumentation issues, but extrapolating the data raised the rating to 5 stars. In both cases, no structural changes were made to the surrogate vehicle, and the results in each case matched their respective baseline vehicle platforms. These results demonstrated that PLRD is viable for further development and production implementation.
Abstract:
In the face of increasing globalisation, and a collision between global communication systems and local traditions, this book offers innovative trans-disciplinary analyses of the value of traditional cultural expressions (TCE) and suggests appropriate protection mechanisms for them. It combines approaches from history, philosophy, anthropology, sociology and law, and charts previously untravelled paths for developing new policy tools and legal designs that go beyond conventional copyright models. Its authors extend their reflections to a consideration of the specific features of the digital environment, which, despite enhancing the risks of misappropriation of traditional knowledge and creativity, may equally offer new opportunities for revitalising indigenous peoples' values and provide for the sustainability of TCE. This book will appeal to scholars interested in multidisciplinary analyses of the fragmentation of international law in the field of intellectual property and traditional cultural expressions. It will also be valuable reading for those working on broader governance and human rights issues.
Abstract:
National and international studies demonstrate that the number of teenagers using the internet is increasing. But even though they do have access, from different places, to the information and communication pool of the internet, there is evidence that the ways in which teenagers use the net - regarding the scope and frequency with which services are used, as well as the preferences for different contents of these services - differ significantly in relation to socio-economic status, education, and gender. The results of the relevant empirical studies may be summarised as follows: teenagers with low (formal) education predominantly use internet services embracing 'entertainment, play and fun', while more highly educated teenagers (also) prefer intellectually more demanding services, particularly those supplying a greater variety of communicative and informative activities. More generally, pedagogical and sociological studies investigating the "digital divide" in a differentiated and sophisticated way - i.e. not only in terms of differences between those who have access to the internet and those who do not - suggest that the internet is no space beyond 'social reality' (e.g. DiMaggio & Hargittai 2001, 2003; Vogelgesang, 2002; Welling, 2003). The different modes of utilisation that structure the internet as a social space are primarily a specific contextualisation of the latter - and thus the opportunities and constraints in the virtual world of the internet are no less related than those in the 'real world' to unequal distributions of material, social and cultural resources, as well as to the social embeddings of the actors involved. This inequality also holds for the outcomes of using the internet. Empirical and theoretical results concerning forms and processes of networking and community building - i.e. sociability on the internet, as well as the social embeddings of the users which are mediated through the internet - suggest that net-based communication and information processes may yield the resource of 'social support'. Thus, with reference to social work and the task of compensating for the reproduction of social disadvantages - whether they are medial or not - the ways in which teenagers get access to and utilise net-based social support need to be analysed.
Abstract:
The purpose of the article is to provide a doctrinal summary of the concept, rules and policy of exhaustion, first on the international and EU level, and later under the law of the United States. Building on this introduction, the paper turns to an analysis of the doctrine in the pioneering court decisions handed down in UsedSoft, ReDigi, the German e-book/audio book cases, and the pending Tom Kabinet case from the Netherlands. Questions related to the licence-versus-sale dichotomy, the so-called umbrella solution, the "new copy theory", the migration of digital copies via the internet, the forward-and-delete technology, the issue of lex specialis and the theory of functional equivalence are covered later on. The author stresses that the answers given by the judges in the cited cases are not the final word in the discussion. The UsedSoft preliminary ruling and the subsequent German domestic decisions highlight a special treatment for computer programs. On the other hand, the refusal of digital exhaustion in ReDigi and the audio book/e-book cases might be in accordance with the present wording of copyright law; however, it does not necessarily reflect the proper trends of our age. The paper takes the position that the need for digital exhaustion is constantly growing in society and amongst businesses. Indeed, there are reasonable arguments in favour of equalizing the resale of works sold in tangible and intangible formats. Consequently, the paper urges the reconsideration of the norms on exhaustion at the international and EU level.
Abstract:
Despite current enthusiasm for the investigation of gene-gene and gene-environment interactions, the essential issue of how to define and detect gene-environment interactions remains unresolved. In this report, we define gene-environment interactions as a stochastic dependence, in the context of the effects of genetic and environmental risk factors, on the causes of phenotypic variation among individuals. We use mutual information, which is widely used in communication and complex-system analysis, to measure gene-environment interactions. We investigate how gene-environment interactions generate a large difference in this information measure between the general population and a diseased population, which motivates us to develop mutual information-based statistics for testing gene-environment interactions. We validated the null distribution and calculated the type I error rates for the mutual information-based statistics using extensive simulation studies. We found that the new test statistics were more powerful than traditional logistic regression under several disease models. Finally, to further evaluate the performance of our new method, we applied the mutual information-based statistics to three real examples. Our results showed that the P-values for the mutual information-based statistics were much smaller than those obtained by other approaches, including logistic regression models.
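To make the information measure concrete, here is a minimal sketch of mutual information computed from a genotype-by-exposure count table, with the gap between a general-population table and a case table standing in for the interaction signal described above. The counts are invented, and the abstract does not specify its actual test statistic, so this is background illustration only.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(G; E) in nats from a genotype-by-exposure
    count table, where joint[i, j] = count of genotype i with exposure j."""
    p = joint / joint.sum()            # joint distribution
    pg = p.sum(axis=1, keepdims=True)  # genotype marginal
    pe = p.sum(axis=0, keepdims=True)  # exposure marginal
    mask = p > 0                       # skip zero cells to avoid log(0)
    return float((p[mask] * np.log(p[mask] / (pg @ pe)[mask])).sum())

# Invented 2x2 tables: G and E dependent in the general population,
# independent among cases, so the two MI values differ.
population = np.array([[400.0, 100.0],
                       [100.0, 400.0]])
cases = np.array([[250.0, 250.0],
                  [250.0, 250.0]])

# A large gap between the two measures is the kind of signal the
# abstract's test statistics are built on.
print(mutual_information(population) - mutual_information(cases))
```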
Abstract:
Digital technologies have often been perceived as imperilling traditional cultural expressions (TCE). This angst has interlinked technical and socio-cultural dimensions. On the technical side, it is related to the affordances of digital media that allow, among other things, instantaneous access to information without real location constraints, data transport at the speed of light and effortless reproduction of the original without any loss of quality. In a socio-cultural context, digital technologies have been regarded as the epitome of globalisation forces - not only driving and deepening the process of globalisation itself but also spreading its effects. The present article examines the validity of these claims and sketches a number of ways in which digital technologies may act as benevolent factors. We illustrate in particular that some digital technologies can be instrumentalised to protect TCE forms, reflecting more appropriately the specificities of TCE as a complex process of creation of identity and culture. The article also seeks to reveal that digital technologies - and more specifically the Internet and the World Wide Web - have had a profound impact on the ways cultural content is created, disseminated, accessed and consumed. We argue that this environment may have generated various opportunities for better accommodating TCE, especially in their dynamic sense of human creativity.
Abstract:
In the face of increasing globalisation, there is a pressing need for innovative trans-disciplinary analyses of the value of traditional cultural expressions (TCE) that also suggest appropriate protection mechanisms for them. The book to which this preface belongs combines approaches from history, philosophy, anthropology, sociology and law, and charts previously untravelled paths for developing new policy tools and legal designs that go beyond conventional copyright models. It also reflects upon the specific features of the digital environment, which, despite enhancing the risks of misappropriation of traditional knowledge and creativity, may equally offer some opportunities for revitalising indigenous peoples' values and provide for the sustainability of TCE.
Abstract:
Digital technologies have profoundly changed not only the ways we create, distribute, access, use and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies as an influential factor impacting the regulability of the environment and as a potential regulatory tool in themselves. At the same time, we have seen a deterioration of some public spaces and a lower prioritization of public objectives when strong private commercial interests are at play, most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure upon conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized. The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published in the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses this bold effort by Brown and Marsden.