916 results for Out-group Homogeneity


Relevance: 20.00%

Abstract:

Purpose: Director selection is an important yet under-researched topic. The purpose of this paper is to contribute to the extant literature by developing a greater understanding of how and why new board members are recruited.
Design/methodology/approach: This exploratory study uses in-depth interviews with Australian non-executive directors to identify which selection criteria are deemed most important when selecting new director candidates and how selection practices vary between organisations.
Findings: The findings indicate that appointments to the board are based on two key attributes: first, the candidate's ability to contribute complementary skills and, second, the candidate's ability to work well with the existing board. Despite commonality in these broad criteria, board selection approaches vary considerably between organisations. As a result, some boards do not adequately assess both criteria when appointing a new director, increasing the chance of a mis-fit between the position and the appointed director.
Research limitations/implications: The study highlights the importance of both individual technical capabilities and social compatibility in director selection. The authors introduce a new perspective through which future research may consider director selection: fit.
Originality/value: The in-depth analysis of the director selection process highlights some less obvious and more nuanced issues surrounding directors' appointment to the board. Recurrent patterns indicate the need for both technical and social considerations. The study is thus a first step in synthesising the current literature and illustrates the need for a multi-theoretical approach in future director selection research.

Relevance: 20.00%

Abstract:

Cognitive scientists were not quick to embrace the functional neuroimaging technologies that emerged during the late 20th century. In this new century, cognitive scientists continue to question, not unreasonably, the relevance of functional neuroimaging investigations that fail to address questions of interest to cognitive science. However, some ultra-cognitive scientists assert that these experiments can never be of relevance to the study of cognition. Their reasoning reflects an adherence to a functionalist philosophy that arbitrarily and purposefully distinguishes mental information-processing systems from brain or brain-like operations. This article addresses whether data from properly conducted functional neuroimaging studies can inform and subsequently constrain the assumptions of theoretical cognitive models. The article commences with a focus upon the functionalist philosophy espoused by the ultra-cognitive scientists, contrasting it with the materialist philosophy that motivates both cognitive neuroimaging investigations and connectionist modelling of cognitive systems. Connectionism and cognitive neuroimaging share many features, including an emphasis on unified cognitive and neural models of systems that combine localist and distributed representations. The utility of designing cognitive neuroimaging studies to test (primarily) connectionist models of cognitive phenomena is illustrated using data from functional magnetic resonance imaging (fMRI) investigations of language production and episodic memory.

Relevance: 20.00%

Abstract:

Meta-analyses estimate a statistical effect size for a test or an analysis by combining results from multiple studies without necessarily having access to each individual study's raw data. Multi-site meta-analysis is crucial for imaging genetics, as single sites rarely have a sample size large enough to pick up effects of single genetic variants associated with brain measures. However, if raw data can be shared, combining data in a "mega-analysis" is thought to improve power and precision in estimating global effects. As part of an ENIGMA-DTI investigation, we use fractional anisotropy (FA) maps from 5 studies (total N=2,203 subjects, aged 9-85) to estimate heritability. We combine the studies through meta- and mega-analyses as well as a mixture of the two - combining some cohorts with mega-analysis and meta-analyzing the results with those of the remaining sites. A combination of mega- and meta-approaches may boost power compared to meta-analysis alone.
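The meta-analytic step described above - pooling per-site estimates without sharing raw data - can be sketched with standard inverse-variance fixed-effect weighting. This is a minimal illustration, not the ENIGMA-DTI pipeline itself; the site estimates and standard errors below are made up.

```python
# Minimal inverse-variance fixed-effect meta-analysis sketch.
# Per-site estimates are combined with weights proportional to 1/SE^2,
# so more precise sites contribute more to the pooled estimate.

def meta_analyze(estimates, std_errors):
    """Pool per-site estimates using inverse-variance weights."""
    weights = [1.0 / se**2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * est for w, est in zip(weights, estimates)) / total_w
    pooled_se = (1.0 / total_w) ** 0.5
    return pooled, pooled_se

# Hypothetical heritability estimates of mean FA from five sites.
h2 = [0.55, 0.62, 0.48, 0.70, 0.58]
se = [0.08, 0.10, 0.12, 0.09, 0.07]
pooled, pooled_se = meta_analyze(h2, se)
```

A mega-analysis, by contrast, would fit a single model to the pooled subject-level data, which is why it can gain power when raw data sharing is possible.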

Relevance: 20.00%

Abstract:

The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA-DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe, to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18-85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at http://enigma.loni.ucla.edu/ongoing/dti-working-group/.

Relevance: 20.00%

Abstract:

Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20-30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ~6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected.
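The "joint model explaining a proportion of variance" idea above can be sketched as a simple multi-SNP regression. This is an ordinary least-squares simplification of the mixed-effects model the study describes (which also accounts for family relatedness), and all genotypes, effect sizes, and FA values below are simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 395 subjects, 5 SNPs coded as minor-allele counts (0/1/2).
n, n_snps = 395, 5
genotypes = rng.integers(0, 3, size=(n, n_snps)).astype(float)

# Hypothetical small per-SNP effects on mean FA, plus noise.
true_betas = np.array([0.02, -0.015, 0.01, 0.012, -0.008])
fa = 0.45 + genotypes @ true_betas + rng.normal(0.0, 0.03, size=n)

# Joint model: FA ~ intercept + 5 SNPs, fit by ordinary least squares.
X = np.column_stack([np.ones(n), genotypes])
beta_hat, _, _, _ = np.linalg.lstsq(X, fa, rcond=None)

# Proportion of variance in FA explained by the joint model (R^2).
residuals = fa - X @ beta_hat
r_squared = 1 - residuals.var() / fa.var()
```

In the study's setting, the analogous R^2 for the five-SNP model was roughly 6%; a mixed-effects formulation is needed in twin/sibling samples because observations within families are not independent.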

Relevance: 20.00%

Abstract:

The pattern of structural brain alterations associated with major depressive disorder (MDD) remains unresolved. This is in part due to small sample sizes of neuroimaging studies resulting in limited statistical power, disease heterogeneity and the complex interactions between clinical characteristics and brain morphology. To address this, we meta-analyzed three-dimensional brain magnetic resonance imaging data from 1728 MDD patients and 7199 controls from 15 research samples worldwide, to identify subcortical brain volumes that robustly discriminate MDD patients from healthy controls. Relative to controls, patients had significantly lower hippocampal volumes (Cohen’s d=−0.14, % difference=−1.24). This effect was driven by patients with recurrent MDD (Cohen’s d=−0.17, % difference=−1.44), and we detected no differences between first-episode patients and controls. Age of onset ≤21 was associated with a smaller hippocampus (Cohen’s d=−0.20, % difference=−1.85) and a trend toward a smaller amygdala (Cohen’s d=−0.11, % difference=−1.23) and larger lateral ventricles (Cohen’s d=0.12, % difference=5.11). Symptom severity at study inclusion was not associated with any regional brain volumes. Sample characteristics such as mean age, proportion of antidepressant users and proportion of remitted patients, and methodological characteristics did not significantly moderate alterations in brain volumes in MDD. Samples with a higher proportion of antipsychotic medication users showed larger caudate volumes in MDD patients compared with controls. This study, currently the largest worldwide effort to identify subcortical brain alterations in MDD, showed robustly smaller hippocampal volumes in MDD patients, moderated by age of onset and first-episode versus recurrent-episode status.
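Effect sizes like the Cohen's d values above are standardized mean differences between patients and controls. A minimal sketch of the pooled-standard-deviation form follows; the volumes and standard deviations used are illustrative placeholders, not the study's actual data (the group sizes match the abstract's totals).

```python
# Cohen's d from two group summaries, using the pooled standard deviation.
# A negative d means the first group's mean is below the second's.

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (patients minus controls)."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / pooled_var**0.5

# Hypothetical hippocampal volumes in mm^3: 1728 patients vs 7199 controls.
d = cohens_d(4150.0, 380.0, 1728, 4205.0, 390.0, 7199)
# A small negative d corresponds to slightly smaller volumes in patients,
# in the same range as the hippocampal effect reported above.
```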

Relevance: 20.00%

Abstract:

Purpose: The purpose of this paper is to explore the concept of service quality for settings where several customers are involved in the joint creation and consumption of a service. The approach is to provide first insights into the implications of simultaneous multi-customer integration on service quality.
Design/methodology/approach: This conceptual paper undertakes a thorough review of the relevant literature before developing a conceptual model of service co-creation and service quality in customer groups.
Findings: Group service encounters must be set up carefully to account for the dynamics (social activity) in a customer group and the skill set and capabilities (task activity) of each of the individual participants involved in a group service experience.
Research limitations/implications: Future research should undertake empirical studies to validate and/or modify the model suggested in this contribution.
Practical implications: Managers of service firms should be made aware of the implications and underlying factors of group services in order to create and manage a group experience successfully. Particular attention should be given to those factors that service providers can influence when managing encounters with multiple customers.
Originality/value: This article introduces a new conceptual approach for service encounters with groups of customers in a proposed service quality model. In particular, the paper focuses on integrating the impact of customers' co-creation activities on service quality in a multiple-actor model.

Relevance: 20.00%

Abstract:

Techniques to align spatio-temporal data for large-scale analysis of human group behaviour have been developed. Applying these techniques to sports databases enables a sports team's characteristic style of play to be discovered and compared for tactical analysis. Surveillance applications have also been developed that recognise group activities in real time and support person re-identification from low-resolution video footage.

Relevance: 20.00%

Abstract:

Bottom-emitting organic light emitting diodes (OLEDs) can suffer from lower external quantum efficiencies (EQE) due to inefficient out-coupling of the generated light. Herein, it is demonstrated that the current efficiency and EQE of red, yellow, and blue fluorescent single-layer polymer OLEDs are significantly enhanced when a MoOx(5 nm)/Ag(10 nm)/MoOx(40 nm) stack is used as the transparent anode in a top-emitting OLED structure. A maximum current efficiency and EQE of 21.2 cd/A and 6.7%, respectively, were achieved for a yellow OLED, while a blue OLED achieved a maximum of 16.5 cd/A and 10.1%, respectively. The increase in light out-coupling from the top-emitting OLEDs led to an increase in efficiency by a factor of up to 2.2 relative to the optimised bottom-emitting devices, which is the best out-coupling reported using solution-processed polymers in a simple architecture and a significant step forward for their use in large-area lighting and displays.

Relevance: 20.00%

Abstract:

Drunkenness and the addictive consumption of alcohol remain key social and public health concerns. Advancing beyond traditional individualized prevention approaches, this research explores the role of social influences in determining individual and group influence in moderate-drinking decision-making and participatory actions. A social influence model of intentional moderate-drinking actions is conceptualized and validated. Results show group norm to be the single social-influence predictor of intentions and desire to drink moderately, as opposed to well-known social-influence factors (e.g., subjective norm, social identity and drinking contextual effects). Significantly, the peer group is identified as a key influencer supporting moderate-drinking practices, and i-intentions to drink moderately predict group-related we-intentions, which suggests that moderate drinking is a shared goal. These findings advance alcohol prevention research by drawing attention to the power of group dynamics to support positive changes in youth drinking behaviors.

Relevance: 20.00%

Abstract:

Back in 1995, Peter Drahos wrote a futuristic article called ‘Information feudalism in the information society’. It took the form of an imagined history of the information society in the year 2015. Drahos provided a pessimistic vision of the future, in which the information age was ruled by the private owners of intellectual property. He ended with the bleak, Hobbesian image: "It is unimaginable that the information society of the 21st century could be like this. And yet if abstract objects fall out of the intellectual commons and are enclosed by private owners, private, arbitrary, unchecked global power will become a part of life in the information society. A world in which seed rights, algorithms, DNA, and chemical formulas are owned by a few, a world in which information flows can be coordinated by information-media barons, might indeed be information feudalism (p. 222)." This science fiction assumed that a small number of states would dominate the emerging international regulatory order set up under the World Trade Organization. In Information Feudalism: Who Owns the Knowledge Economy?, Peter Drahos and his collaborator John Braithwaite reprise and expand upon the themes first developed in that article. The authors contend: "Information feudalism is a regime of property rights that is not economically efficient, and does not get the balance right between rewarding innovation and diffusing it. Like feudalism, it rewards guilds instead of inventive individual citizens. It makes democratic citizens trespassers on knowledge that should be the common heritage of humankind, their educational birthright. Ironically, information feudalism, by dismantling the publicness of knowledge, will eventually rob the knowledge economy of much of its productivity (p. 219)." Drahos and Braithwaite emphasise that the title Information Feudalism is not intended to be taken at face value by literal-minded readers, and crudely equated with medieval feudalism. Rather, the title serves as a suggestive metaphor. It designates the transfer of knowledge from the intellectual commons to private corporations under the regime of intellectual property.

Relevance: 20.00%

Abstract:

Stephen Gray is a writer and law lecturer who has been living in Darwin since 1989. He started out writing formal legal pieces about how copyright law had unsuccessfully sought to accommodate Aboriginal art. Such work led him to further investigate the philosophical questions underlying the legal issues affecting both traditional and urban Indigenous people. Gray has also explored matters of bioprospecting in relation to Indigenous biological resources. He has investigated the introduction of a label of authenticity into Australia. Gray has also published a number of articles about other legal issues affecting Indigenous people. He has explored such topics as native title, customary law, alternative dispute resolution, and criminal law. Gray has recently been awarded The Australian/Vogel Literary Award for his novel The Artist is a Thief. He was inspired to write a book after being sent out to a community on a possible copyright claim as part of his job in the law faculty of Northern Territory University: "I wrote an academic article and then a more philosophical piece talking about the copyright act and the way it doesn't really protect traditional artists who have a very different view of the place of their art. The pieces were interesting, but I felt there was something more there that needed a fictional expression as well." It is ironic that such a self-conscious and sophisticated meditation upon appropriation and authenticity should win The Australian/Vogel Literary Award. The inaugural award in 1980 was won by Paul Radley, who later revealed his books were mostly written by his uncle, and in 1993 it was won by Helen Demidenko, aka Darville, who had lied about her Ukrainian background and family history.

Relevance: 20.00%

Abstract:

In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into the visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media—the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Stewart Brand commented: ‘If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be’. Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services: You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy‐strewn basement rec rooms—than you could from 1,000 hours of network television. And we didn’t just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open‐source software. America loves its solitary geniuses—its Einsteins, its Edisons, its Jobses—but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user‐created Linux. 
We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy. The magazine announced that Time’s Person of the Year was ‘You’, the everyman and everywoman consumer ‘for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game’. This review essay considers three recent books, which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon’s edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of ‘generative’ technologies and ‘tethered applications’—considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.

Relevance: 20.00%

Abstract:

For a hundred years, since Federation, Australian consumers have suffered the indignity and the tragedy of price discrimination. From the time of imperial publishing networks, Australia has suffered from cultural colonialism. In respect of the pricing of copyright works, Australian consumers have been gouged, ripped off, and exploited. Digital technologies have not necessarily brought an end to such price discrimination. Australian consumers have been locked out by technological protection measures; subjected to surveillance, privacy intrusions and security breaches; locked into walled gardens by digital rights management systems; and geo-blocked.