860 results for "Power to decide process"


Relevance: 100.00%

Abstract:

Strategy is highly important for organisational success and the achievement of competitive advantage. Strategy is dynamic and depends on accurate individual decision-making by medium- and high-level managers and executives. Since strategy is formulated by managers, its quality depends mostly on the soundness of their decisions. Making good decisions is a complex task, all the more so in today's business world, where a large quantity of information and a dynamic environment force people to decide without complete information. As Shafir, Simonson, & Tversky (1993) point out, "the making of decisions, both big and small, is often difficult because of uncertainty and conflict". In this paper the author will explain a basic theoretical framework for top managers' individual decision-making, showing how complex the process of making high-impact decisions is; he will then compare this theory with one of the most important streams in strategic management, the Resource-Based View (RBV) of the firm. Finally, within the context of individual decision-making and the RBV stream, the author will show how individual decision makers in top management positions constitute a valuable, rare, non-imitable and non-substitutable resource that provides sustained competitive advantage.

Relevance: 100.00%

Abstract:

The purpose of this study was to assess the knowledge of public school administrators with respect to special education (ESE) law. The study used a sample of 220 public school administrators. A survey instrument was developed consisting of 19 demographic questions and 20 situational scenarios. The scenarios were based on ESE issues of discipline, due process (including IEP procedures), identification, evaluation, placement, and related services. Participants had to decide whether a violation of the ESE child's rights had occurred by marking (a) Yes, (b) No, or (c) Undecided. After a 77% survey response rate, the scores and demographic information were analyzed using a two-way analysis of variance, chi-square tests, and crosstabs.

Research questions addressed the administrators' overall level of knowledge. Comparisons were made between principals and assistant principals and across school levels. Exploratory questions concerned the ESE issues administrators deemed problematic, the effects of demographic variables on survey scores, and the resources administrators use to access ESE information.

The study revealed that: (a) the number of ESE courses taken was significantly related to the survey score; (b) the top five resources of ESE information were the region office, school ESE department chairs, ESE teachers, county workshops, and county inservices; (c) problematic areas included discipline, evaluation procedures, placement issues, and IEP due process concerns; (d) administrators as a group did not exhibit satisfactory knowledge of ESE law, with a mean score of 12 correct and 74% of responding administrators scoring at the unsatisfactory level (below 70%); (e) across school levels, elementary administrators scored significantly higher than high school administrators; and (f) assistant principals consistently scored higher than principals on each scenario, with a significant difference at the high school level.

The study reveals a vital need for administrators to receive additional preparation so that they possess a basic understanding of ESE school law and how it affects their schools and school districts, enabling them to meet professional obligations and protect the rights of all individuals involved. Recommendations for this additional administrative preparation and topics for further research are discussed.
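The analysis described above combines crosstabs with chi-square tests. As a minimal illustration of that portion of the methodology (with hypothetical roles and pass/fail outcomes, not the study's data), a crosstab and chi-square test of independence might look like this:

```python
# Hypothetical sketch: crosstab + chi-square test of "satisfactory score" vs. role,
# in the spirit of the analysis described above (not the study's actual data).
import pandas as pd
from scipy.stats import chi2_contingency

# Illustrative records: role of the administrator and whether the survey
# score reached the 70% "satisfactory" threshold.
df = pd.DataFrame({
    "role":         ["principal", "assistant", "principal", "assistant", "assistant", "principal"],
    "satisfactory": ["no", "yes", "no", "yes", "no", "no"],
})

table = pd.crosstab(df["role"], df["satisfactory"])   # contingency table
chi2, p, dof, expected = chi2_contingency(table)      # chi-square test of independence
print(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```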

Relevance: 100.00%

Abstract:

The purpose of the present research is to demonstrate the influence of a fair price (independent of the subjective evaluation of the price magnitude) on buyers' willingness to purchase. The perceived fairness of a price is conceived to have three components: perceived equity, perceived need, and inferred compliance of the seller with the process rules of pricing. These components reflect the theories of Distributive Justice (as adjusted for conditions of need) and Procedural Justice.

The effect of the three components of a fair price on willingness to purchase is depicted in a causal chain model. Based on the theories of Dissonance and Attribution, conditions of inequity and need activate concerns for Procedural Justice. Under conditions of inequity and need, buyers tend to infer that the seller has not complied with generally accepted pricing practices, thus violating the social norms of Procedural Justice. Inferred violations of Procedural Justice influence the buyer's attitude toward the seller. As predicted by the Theory of Reasoned Action, attitude is then positively related to willingness to purchase.

The model was tested with a survey-based experiment conducted with 408 respondents. Two levels each of equity and need were manipulated with scenarios, a common research method in studies of Distributive and Procedural Justice. The data were analyzed with a structural equation model using LISREL. Although the effect of the need manipulation was insignificant, the results indicated a good fit of the model (Chi-square = 281, degrees of freedom = 104, Goodness of Fit Index = .924). The conclusion is that the fairness of a price has a significant effect on willingness to purchase, independent of the subjective evaluation of the objective price.
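As a quick arithmetic check on the reported fit statistics, a chi-square/df ratio below roughly 3 is a common rule of thumb for acceptable fit; the short calculation below only performs that arithmetic and is not part of the original LISREL analysis:

```python
# Rule-of-thumb fit check for the reported structural equation model
# (chi-square / degrees of freedom below ~3 is often read as acceptable fit).
chi_square = 281.0
dof = 104
ratio = chi_square / dof
print(f"chi2/df = {ratio:.2f}")   # ~2.70, consistent with the reported GFI of .924
```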

Relevance: 100.00%

Abstract:

A model was tested to examine relationships among leadership behaviors, team diversity, and team process measures with team performance and satisfaction at both the team and leader-member levels of analysis. Relationships between leadership behavior and team demographic and cognitive diversity were hypothesized to have both direct effects on organizational outcomes and indirect effects through team processes. Leader-member differences were investigated to determine the effects of leader-member diversity on leader-member exchange quality, individual effectiveness, and satisfaction.

Leadership had little direct effect on team performance but several strong positive indirect effects through team processes. Demographic diversity had no impact on team processes, directly affected only one performance measure, and moderated the leadership-to-team-process relationship. Cognitive diversity had a number of direct and indirect effects on team performance, with uniformly positive net effects, and did not moderate the leadership-to-team-process relationship. In sum, the team model suggests a complex combination of leadership behaviors positively affecting team processes, demographic diversity having little impact on team process or performance, cognitive diversity having a positive net impact, and team processes having mixed effects on team outcomes.

At the leader-member level, leadership behaviors were a strong predictor of leader-member exchange (LMX) quality. Leader-member demographic and cognitive dissimilarity were each predictors of LMX quality but failed to moderate the leader behavior-to-LMX quality relationship. LMX quality was strongly and positively related to self-reported effectiveness and satisfaction.

The study makes several contributions to the literature. First, it explicitly links leadership and team diversity. Second, demographic and cognitive diversity are conceptualized as distinct and multi-faceted constructs. Third, a methodology for creating an index of categorical demographic and interval cognitive measures is provided so that diversity can be measured in a holistic, conjoint fashion. Fourth, the study simultaneously investigates the impact of diversity at the team and leader-member levels of analysis. Fifth, insights are provided into the moderating impact of different forms of team diversity on the leadership-to-team-process relationship. Sixth, the study incorporates a wide range of objective and independent measures to provide a 360° assessment of team performance.
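The dissertation's conjoint diversity index is not reproduced here, but a hedged sketch of two standard ingredients such an index could combine, Blau's index for categorical attributes and the coefficient of variation for interval attributes, is shown below with hypothetical team data:

```python
# Illustrative diversity measures for a team (not the dissertation's exact method):
# Blau's index (1 - sum of squared proportions) for categorical attributes and
# the coefficient of variation for interval-scaled attributes.
from collections import Counter
from statistics import mean, pstdev

def blau_index(values):
    """Categorical diversity: 0 = homogeneous, approaches 1 as categories spread out."""
    n = len(values)
    counts = Counter(values)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def coefficient_of_variation(values):
    """Interval-scale diversity: dispersion relative to the mean."""
    m = mean(values)
    return pstdev(values) / m if m else 0.0

team_gender = ["F", "M", "M", "F", "M"]            # hypothetical demographic attribute
team_cognitive_score = [3.2, 4.1, 2.8, 3.9, 3.5]   # hypothetical interval measure

print(f"Blau index (gender): {blau_index(team_gender):.3f}")
print(f"CV (cognitive score): {coefficient_of_variation(team_cognitive_score):.3f}")
```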

Relevance: 100.00%

Abstract:

The total time a customer spends in a business process, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network (MOQN) models so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process while taking specialization and coordination into account.

The main findings of the research are as follows. First, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity; however, when servers are highly utilized, the coordination time introduced by parallelization overwhelms its savings because the waiting time increases significantly, and thus the cycle-time increases. Second, the level of industrial technology employed by a company and the coordination time needed to manage tasks have the strongest impact on business process design: when the level of industrial technology is high, more division of work is required to improve the cycle-time, and when the required coordination time is high, consolidation is required to improve the cycle-time.
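To illustrate the cycle-time logic that a queueing model captures, the sketch below evaluates a single M/M/c station via the Erlang C formula (waiting time plus service time). It is a one-node illustration under standard Markovian assumptions, not the dissertation's multi-class open queueing network:

```python
# Minimal single-station illustration of the cycle-time idea (M/M/c via Erlang C);
# the actual dissertation model is a multi-class open queueing network.
from math import factorial

def mmc_cycle_time(arrival_rate, service_rate, servers):
    """Expected time in system (waiting + service) for an M/M/c queue."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # utilization, must be < 1 for stability
    if rho >= 1:
        raise ValueError("Unstable station: utilization >= 1")
    tail = (a ** servers / factorial(servers)) / (1 - rho)
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(servers)) + tail)  # Erlang C
    wq = p_wait / (servers * service_rate - arrival_rate)   # expected waiting time
    return wq + 1.0 / service_rate                           # cycle time at this station

# Example: 10 customers/hour, each task served at 4/hour (15 minutes), 3 clerks.
print(f"{mmc_cycle_time(10, 4, 3):.3f} hours")
```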

Relevance: 100.00%

Abstract:

In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. The IEEE 802.11 series of Wireless Local Area Network (WLAN) standards attracts much attention due to its low cost and high data rate. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for such networks.

Existing wireless applications typically use omni-directional antennas, whose gain is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time; nodes other than the sender and the receiver must be either idle or listening, otherwise collisions can occur. The downside of omni-directionality is that the spatial reuse ratio is low and the capacity of the network is considerably limited.

Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits: it can improve transport capacity by decreasing the interference of a directional main lobe; it can increase coverage range due to a higher Signal to Interference plus Noise Ratio (SINR), i.e., better connectivity can be achieved with the same power consumption; and power usage can be reduced, i.e., for the same coverage, a transmitter can lower its power consumption.

To utilize the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and pipelined data transmission can be achieved using directional antennas. Throughput improves significantly when the relay-enabled MAC protocol is introduced.

Besides these strong points, directional antennas also have explicit drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a trade-off between omni-directional and directional antennas that yields better network performance.

Directly and mathematically establishing the relationship between network performance and antenna configuration is extremely difficult, if not intractable. Therefore, in this research we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem, which can improve network performance significantly. Our proposed methods consist of two steps. The first step (clustering links) clusters the links into different groups based on the matrix-based system model; after clustering, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) decides the type of antenna for each group: some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments are conducted to compare the proposed methods with existing methods, and experimental results demonstrate that our clustering-based methods can improve network performance significantly.
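A highly simplified sketch of the two-step idea (cluster links by the similarity of their neighborhoods, then label each group with an antenna type) is given below; the neighborhood matrix, the clustering choice, and the labeling rule are illustrative assumptions, not the dissertation's matrix-based system model:

```python
# Simplified illustration of the two-step idea: (1) cluster links whose
# neighborhood rows look alike, (2) label each cluster with an antenna type.
# The neighborhood matrix and the labeling rule here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Rows: links; columns: indicator of which nodes are in each link's neighborhood.
neighborhood = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(neighborhood)

# Toy labeling rule: sparsely connected groups get directional antennas,
# densely connected groups keep omni-directional antennas.
for g in np.unique(groups):
    density = neighborhood[groups == g].mean()
    antenna = "directional" if density < 0.5 else "omni-directional"
    print(f"group {g}: links {np.where(groups == g)[0].tolist()} -> {antenna}")
```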

Relevance: 100.00%

Abstract:

This study explores how great powers not allied with the United States formulate their grand strategies in a unipolar international system. Specifically, it analyzes the strategies China and Russia have developed to deal with U.S. hegemony by examining how Moscow and Beijing have responded to American intervention in Central Asia. The study argues that China and Russia have adopted a soft balancing strategy to indirectly balance the United States at the regional level. This strategy uses normative capabilities such as soft power, alternative institutions, and regionalization to offset the overwhelming material hardware of the hegemon. The theoretical and methodological approach of this dissertation is neoclassical realism. Chinese and Russian balancing efforts against the United States are based on their domestic dynamics as well as on systemic constraints; neoclassical realism provides a bridge between the internal characteristics of states and the environment in which those states are situated. Because China and Russia do not have the hardware (military or economic power) to directly challenge the United States, they must resort to their software (soft power and norms) to indirectly counter American preferences and set the agenda to pursue their own interests. Neoclassical realism maintains that soft power is an extension of hard power and a reflection of the internal makeup of states. The dissertation uses the heuristic case study method to demonstrate the efficacy of soft balancing. Such case studies help to facilitate theory construction and are not necessarily the demonstrable final say on how states behave in given contexts. Nevertheless, the study finds that China and Russia have increased their soft power to counterbalance the United States in certain regions of the world, Central Asia in particular. The conclusion explains how soft balancing can be integrated into the overall balance-of-power framework to explain Chinese and Russian responses to U.S. hegemony. It also suggests that an analysis of norms and soft power should be integrated into the study of grand strategy, including both foreign policy and military doctrine.

Relevance: 100.00%

Abstract:

Higher education is a distribution center of knowledge and of economic, social, and cultural power (Cervero & Wilson, 2001). A critical approach to understanding a higher education classroom begins with recognizing the instructor's position of power and authority (Tisdell, Hanley, & Taylor, 2000). The power instructors wield goes mostly unquestioned, allowing for teaching practices that reproduce existing societal patterns of inequity in the classroom (Brookfield, 2000).

The purpose of this hermeneutic phenomenological study was to explore students' experiences with the power of their instructors in a higher education classroom. A hermeneutic phenomenological study intertwines the interpretations of both the participants and the researcher about a lived experience to uncover layers of meaning, because the meanings of lived experiences are usually not readily apparent (van Manen, 1990). Fifteen participants were selected using criterion, convenience, and snowball sampling. The primary data gathering method was semi-structured interviews guided by an interview protocol (Creswell, 2003). Data were interpreted using thematic reflection (van Manen, 1990).

Three themes emerged from data interpretation: (a) structuring of instructor-student relationships, (b) connecting power to instructor personality, and (c) learning to navigate the terrains of higher education. How interpersonal relationships were structured in a higher education classroom shaped how students perceived power in that classroom. Positive relationships were described using the metaphor of family and a perceived ethic of caring and nurturing by the instructor. As participants were consistently exposed to exercises of instructor power in the classroom, they attributed those exercises of power to particular instructor traits rather than to systemic exercises of power. As participants progressed from undergraduate to graduate studies, they came to perceive the benefits of expertise in content or knowledge development as secondary to expertise in successfully navigating the social, cultural, political, and interpersonal terrains of higher education. Ultimately, participants expressed that higher education is not about what you know; it is about learning how to play the game. Implications for teaching in higher education and considerations for future research conclude the study.

Relevance: 100.00%

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal; however, memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through memory caches, at the price of some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where a memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers can store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest level.

The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and the contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
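As an illustration of the duplication problem mentioned above, the sketch below implements a content-addressed LRU buffer cache that stores each distinct block once no matter how many workloads reference it; it is only a toy model of the deduplication idea, not the mechanism proposed in the dissertation:

```python
# Minimal content-addressed LRU buffer cache: blocks with identical contents are
# stored once, so workloads sharing data do not duplicate it in the cache.
# Illustration of the deduplication idea only, not the dissertation's design.
import hashlib
from collections import OrderedDict

class DedupLRUCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # content hash -> block bytes (LRU order)
        self.index = {}                      # (workload, logical block id) -> content hash

    def put(self, workload, block_id, data):
        digest = hashlib.sha256(data).hexdigest()
        self.index[(workload, block_id)] = digest
        if digest in self.blocks:
            self.blocks.move_to_end(digest)  # duplicate content: just refresh recency
            return
        self.blocks[digest] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used block

    def get(self, workload, block_id):
        digest = self.index.get((workload, block_id))
        if digest is None or digest not in self.blocks:
            return None                      # miss
        self.blocks.move_to_end(digest)
        return self.blocks[digest]

cache = DedupLRUCache(capacity_blocks=2)
cache.put("vm1", 0, b"same bytes")
cache.put("vm2", 7, b"same bytes")           # second workload, same content: one copy kept
print(len(cache.blocks))                     # -> 1
```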

Relevance: 100.00%

Abstract:

A wide range of non-destructive testing (NDT) methods for monitoring the health of concrete structures has been studied for several years. The recent rapid evolution of wireless sensor network (WSN) technologies has resulted in the development of sensing elements that can be embedded in concrete to monitor the health of infrastructure and to collect and report valuable related data. Such a monitoring system can potentially decrease the long installation time and reduce the maintenance cost associated with wired monitoring systems. The monitoring sensors need to operate for a long period of time, but sensor batteries have a finite life span; hence, novel wireless powering methods must be devised. The optimization of wireless power transfer via Strongly Coupled Magnetic Resonance (SCMR) to sensors embedded in concrete is studied here. First, we analytically derive the optimal geometric parameters for transmission of power in air. This leads specifically to the identification of the local and global optimization parameters and conditions, which were validated through electromagnetic simulations. Second, the optimum conditions were employed in a model for the propagation of energy through plain and reinforced concrete at different humidity conditions and frequencies, using the extended Debye model. This analysis leads to the conclusion, also validated through electromagnetic simulations, that SCMR can be used to efficiently power sensors in plain and reinforced concrete at different humidity levels and depths. The optimization of wireless power transmission via SCMR to Wearable and Implantable Medical Devices (WIMDs) is also explored. The optimum conditions from the analytics were used in a model for the propagation of energy through different human tissues. This analysis shows, through electromagnetic simulations, that SCMR can be used to efficiently transfer power to sensors in human tissue without overheating, as excessive power might result in overheating of the tissue. Standard SCMR is sensitive to misalignment; both 2-loop and 3-loop SCMR designs with misalignment-insensitive performance are presented. Power transfer efficiencies above 50% were achieved over the complete misalignment range of 0°-90°, dramatically better than typical SCMR, whose efficiency falls below 10% in extreme misalignment topologies.
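For two magnetically coupled resonators, a commonly cited closed-form estimate of the peak link efficiency uses the figure of merit U = k*sqrt(Q1*Q2), with efficiency U^2 / (1 + sqrt(1 + U^2))^2. The sketch below evaluates that textbook expression for illustrative coil parameters; it is not drawn from the electromagnetic simulations reported above:

```python
# Textbook estimate of the maximum efficiency of a two-coil resonant link:
# figure of merit U = k * sqrt(Q1 * Q2), efficiency = U^2 / (1 + sqrt(1 + U^2))^2.
# The coupling coefficient and quality factors below are illustrative values,
# not parameters taken from the study.
from math import sqrt

def scmr_max_efficiency(k, q1, q2):
    u = k * sqrt(q1 * q2)
    return u * u / (1.0 + sqrt(1.0 + u * u)) ** 2

print(f"{scmr_max_efficiency(k=0.05, q1=300, q2=300):.2%}")  # weak coupling, high-Q coils
```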

Relevance: 100.00%

Abstract:

The deployment of wireless communications coupled with the popularity of portable devices has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on predicting two things: (i) the future location of the user and (ii) the locations over which the fetched data remain valid answers to the query. Using this approach, more complete information about the dynamics of an application environment is maintained.

The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. We design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is less likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context, defined by the mobile user's movement pattern and the requested information context, to effectively guide the prefetching process. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones.

Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
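A hedged sketch of the replacement idea described above, scoring entries by how far their validity region lies from the user's predicted next location, follows; the geometry and scoring rule are assumptions for illustration, not the dissertation's exact policy:

```python
# Illustrative eviction scoring in the spirit of the policy described above:
# entries whose validity region is far from the user's predicted next location
# are better eviction candidates. Weights and distance model are assumptions.
from dataclasses import dataclass
from math import hypot

@dataclass
class CacheEntry:
    item_id: str
    valid_center: tuple   # (x, y) centre of the region where the answer is valid
    valid_radius: float   # radius of that validity region

def eviction_score(entry, predicted_location):
    """Higher score = less likely needed soon = better candidate for eviction."""
    dx = entry.valid_center[0] - predicted_location[0]
    dy = entry.valid_center[1] - predicted_location[1]
    return max(0.0, hypot(dx, dy) - entry.valid_radius)

entries = [
    CacheEntry("nearest_gas_station", (1.0, 1.0), 2.0),
    CacheEntry("hotel_listing",       (9.0, 9.0), 1.0),
]
predicted = (2.0, 1.5)   # where the movement-pattern model expects the user next
victim = max(entries, key=lambda e: eviction_score(e, predicted))
print(f"evict: {victim.item_id}")   # -> hotel_listing
```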

Relevance: 100.00%

Abstract:

Brazil is currently one of the largest fruit producers worldwide, with most of its production consumed fresh or as juice or pulp. It is important to highlight that the fruit production chain suffers considerable losses, due mainly to climate as well as to storage, transportation, seasonality, and market conditions. In the fruit pulp and processing industry a yield of about 50% (by mass) is usually obtained, with the remainder discarded as waste. However, since most of this waste has a high nutrient content, it can be used to generate added-value products, and drying plays an important role as an alternative process for upgrading the wastes generated by the fruit industry. Despite the advantages of this technique, issues such as its high power demand and limited thermal efficiency must be addressed. Controlling the main variables of the drying process is therefore essential in order to find operating conditions that yield a final product within the target specification at a lower power cost, and mathematical models can be applied to the process as a tool for identifying the best conditions. The main aim of this work was to evaluate the drying behaviour of an industrial guava pulp waste in a batch convective tray dryer, both experimentally and through mathematical modeling. In the experimental study, the drying carried out on a group of trays, as well as the power consumption, was assessed in response to the operating conditions (temperature, drying air flow rate, and solid mass). The results made it possible to identify the most significant variables in the process. The phenomenological mathematical model was validated and allowed the moisture and temperature profiles of the solid and gas phases to be followed in every tray. Simulation results showed the most favorable procedure for obtaining the minimum processing time as well as the lowest power demand.
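Thin-layer drying data of this kind are often summarized with simple empirical models such as the Page equation, MR = exp(-k*t^n). The sketch below fits that generic model to made-up moisture-ratio data; it is not the phenomenological multi-tray model validated in this work:

```python
# Generic thin-layer drying fit (Page model, MR = exp(-k * t**n)) on made-up data;
# the study itself validates a phenomenological multi-tray model, not this one.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t ** n)

t_min = np.array([0, 30, 60, 90, 120, 180], dtype=float)        # drying time, minutes
moisture_ratio = np.array([1.0, 0.72, 0.51, 0.37, 0.26, 0.14])  # hypothetical MR data

(k, n), _ = curve_fit(page_model, t_min, moisture_ratio, p0=(0.01, 1.0))
print(f"k = {k:.4f} 1/min^n, n = {n:.3f}")
```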

Relevance: 100.00%

Abstract:

From an economic standpoint, powder metallurgy (P/M) is a technique widely used for the production of small parts. Through P/M and prior comminution of solid waste such as ferrous chips, it is possible to produce highly dense sintered parts of interest to the automotive, electronics, and aerospace industries. However, without prior comminution of the chips, producing bodies with a density equal to the theoretical density by conventional sintering techniques requires the use of additives or temperatures significantly higher than 1250°C. An alternative route for producing high-density sintered bodies from ferrous chips (≤ 850 microns) with solid-phase sintering is compaction under high pressure (HP). In this work, different compaction pressures were used to produce sintered bodies from SAE 1050 carbon steel chips. Specifically, the objective was to investigate the effect of high-pressure compaction on the densification behavior of the sintered samples. Samples of SAE 1050 carbon steel chips were uniaxially cold compacted at 500 and 2000 MPa, respectively. The green compacts obtained were sintered under a carbon atmosphere at 1100 and 1200°C for 90 minutes, with a heating rate of 20°C/min. The starting materials and the sintered bodies were characterized by optical microscopy, SEM, XRD, density measurements (geometric, i.e., mass/volume, and pycnometry), Vickers microhardness, and Rockwell hardness. The results showed that the compacts produced under 2000 MPa presented relative density values between 93% and 100% of the theoretical density and microhardness between 150 HV and 180 HV. In contrast, the compacts produced under 500 MPa showed a very heterogeneous microstructure, density values below 80% of the theoretical density, and structural conditions inadequate for carrying out hardness and microhardness measurements. The results indicate that high-pressure compaction of ferrous chips is a promising route for improving the sinterability of this type of material: in addition to promoting greater compaction of the starting material, the external stress acts together with surface tension as the driving force for the sintering process. Additionally, extremely high pressures allow plastic deformation of the material, providing intimate and extended contact between the particles and eliminating cracks and pores. This tends to reduce the time and/or temperature required for good sintering, avoiding excessive grain growth without the use of additives. Moreover, higher pressures fracture the grains of brittle or highly hardened ductile materials, which provides a finer starting powder for sintering without the contamination risk present when conventional powder comminution methods are used.
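Relative density, as reported above, is simply the measured density divided by the theoretical density of the material. A small example of that arithmetic follows, assuming a nominal theoretical density of 7.85 g/cm³ for plain carbon steel and illustrative sample values:

```python
# Relative density = measured density / theoretical density, expressed in percent.
# The sample mass/volume and the nominal 7.85 g/cm^3 for carbon steel are
# illustrative values, not measurements from the study.
def relative_density(mass_g, volume_cm3, theoretical_g_cm3=7.85):
    geometric_density = mass_g / volume_cm3
    return 100.0 * geometric_density / theoretical_g_cm3

print(f"{relative_density(mass_g=7.42, volume_cm3=1.0):.1f}% of theoretical density")
```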

Relevance: 100.00%

Abstract:

In line with the process of financialization and globalization of capital, which has intensified in all latitudes of the globe, the world of work is permeated by the determinations arising from it and has been (re)configured by numerous changes, expressed for example in the unbridled expansion of temporary, flexible, and outsourced forms of work and in the growth of informality, forming a new morphology of work. Regardless of how these forms are expressed in concrete materiality, something unifies them: all are marked by the exponential growth of insecurity and hence by numerous negative effects on the lives of individuals who need to sell their labor power to survive. Given this premise, the present work studies, within the framework of the Brazilian particularities of the transition between Fordism and Toyotism, what we call the composite configurations of the conditions and labor relations found within the textile industry of Rio Grande do Norte. To this end, guided by historical and dialectical materialism, we made use of qualitative social research, employing semi-structured interviews in addition to a literature review, documentary research, and field notes. From our investigation, we note that over the time span stretching from the 1990s to the current year, the Natal textile industry has been undergoing a process of successive and intense changes in its modus operandi, geared specifically to the organization and management of labor and causing, concomitantly, several repercussions for the entire working class.
