
Using semantic feature norms to investigate how the visual and verbal modes afford metaphor construction and expression*

Published online by Cambridge University Press: 18 October 2016

MARIANNA BOLOGNESI*
University of Amsterdam, the Netherlands

Abstract

In this study, two modalities of expression (verbal and visual) are compared and contrasted with respect to their capacities and limitations in constructing and expressing metaphors. A representative set of visual metaphors and a representative set of linguistic metaphors are compared, and the semantic similarity between metaphor terms is modeled within the two sets. This similarity is operationalized in terms of semantic features produced by informants in a property generation task (e.g., McRae et al., 2005). Semantic features provide insights into conceptual content, and play a role in deep conceptual processing, as opposed to shallow linguistic processing. They therefore appear useful for modeling metaphor comprehension, on the assumption that metaphors are matters of thought rather than simple figures of speech (Lakoff & Johnson, 1980). The question tackled in this paper is whether semantic features can account for the similarity between the terms of both visual and verbal metaphors. For this purpose, a database of semantic features was collected and then used to analyze fifty visual metaphors and fifty verbal metaphors. It was found that the number of semantic features shared between metaphor terms is predicted by the modality of expression of the metaphor: the terms compared in visual metaphors share semantic features, while the terms compared in verbal metaphors do not. This suggests that the two modalities of expression afford different ways of constructing and expressing metaphors.

Type: Research Article

Copyright © UK Cognitive Linguistics Association 2016

1. Introduction

Can the number of features shared by two concepts account for their similarity when the concepts are aligned in metaphors expressed in images versus in language?

And what type of information is carried by these semantic features, in relation to the metaphor terms aligned in visual vs. linguistic metaphors? Are there differences in the way these two modalities of expression construct and express metaphors?

Semantic feature norms are basic attributes derived from human experiences with given concepts, which provide insights into core aspects of the content of rich and varied concepts. Such features are typically generated by speakers and standardized by researchers into feature norms, i.e., one-word labels or short sentences that describe a meaningful aspect of a concept, such as perceptual and non-perceptual properties, functions, and taxonomic relations (for example, given the concept “car”, the most frequent semantic features produced by informants typically are “has wheels”, “used for transportation”, and so on). The resulting concept–feature pairs are collected in datasets (such as McRae, Cree, Seidenberg, & McNorgan, 2005; Vinson & Vigliocco, 2008; Recchia & Jones, 2011) and are generally enriched with metadata, such as production frequency and feature type. The latter piece of information is based on different coding schemes, such as, for example, the knowledge-based taxonomy (Wu & Barsalou, 2009), the brain region taxonomy (Cree & McRae, 2003), or schemes that have been adapted for the annotation of concrete and abstract concepts (5-category taxonomy in Vinson & Vigliocco, 2008; 19-category taxonomy in Recchia & Jones, 2011). Concepts that share many features can be ascribed to the same conceptual category, under the assumption that the degree of similarity between two concepts can be operationalized in terms of feature matching (Tversky, 1977).
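To make the format of such datasets concrete, the following minimal sketch (in Python, with invented entries and field names rather than those of any published norm set) shows how concept–feature pairs and their metadata might be represented:

```python
# Hypothetical concept-feature records: each pairs a concept with one
# normalized feature, its production frequency (how many informants
# produced it), and a feature-type label from a coding scheme.
from collections import defaultdict

norms = [
    # (concept, feature, production_frequency, feature_type)
    ("car", "has_wheels", 27, "entity: component"),
    ("car", "used_for_transportation", 24, "situation: function"),
    ("car", "vehicle", 18, "taxonomic: superordinate"),
]

features_by_concept = defaultdict(dict)
for concept, feature, freq, ftype in norms:
    features_by_concept[concept][feature] = {"freq": freq, "type": ftype}

print(features_by_concept["car"]["has_wheels"])  # {'freq': 27, 'type': 'entity: component'}
```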

The question addressed in this paper is whether (and to what extent) the information encoded in semantic features can account for the similarity between two terms compared in visual or in linguistic metaphors. In this respect, it must be mentioned that a long-standing body of research suggests that the similarity between two terms that are compared in a metaphor can be based on the attributes that belong to the compared entities, which are typically perceptual properties, or on their relational properties, which are typically non-perceptual (e.g., Gentner, 1989). The present study exploits the information encoded in semantic features, which is mainly attributional rather than relational (Footnote 1).

The type of information expressed by the collected features for the metaphor terms will be classified and compared across the two different modalities of expression, using a well-established taxonomy of semantic feature types (Wu & Barsalou, 2009). This will allow us to observe different distributions of property types that speakers consider salient when they imagine given concepts that are used in metaphors.

Some methodological clarifications need to be made in relation to this study. First, within the body of literature in metaphor studies, the interaction view suggests that the similarity between two terms compared in a metaphor emerges specifically from their interaction, rather than being pre-existing (e.g., Black, 1979). In this respect, the present study does not aim to cover all the possible cognitive operations that underlie metaphor comprehension. Rather, it is limited to modeling (and trying to predict) the functioning of those metaphors in which the similarity is somehow pre-existing (although latent and non-explicit) and is a part of the conceptual content of the metaphor domains, encoded in the semantic features. A number of studies that are currently in preparation will aim at exploring and modeling alternative types of cognitive operations afforded by metaphor (e.g., emergent similarity vs. pre-existing similarity, as addressed here).

Second, a long-standing body of literature in metaphor studies claims that conceptual metaphors are based on image schemas (generic concepts such as SOURCE–PATH–GOAL, CONTAINMENT, or BALANCE), which provide a scaffolding for grounding abstract concepts in human experience (e.g., Johnson, 1987; Lakoff, 1987; Gibbs, 2006). To the best of my knowledge there are no studies that explicitly compare and distinguish the type of information and the nature of the semantic representations based on image schemas vs. semantic features. I believe that both types pertain to the conceptual dimension of meaning, but they provide access to conceptual content at different levels of abstraction: image schemas are more schematic and abstract, while semantic features are more closely tied to the perceptual experiences in which the concepts have been perceived.

In the present study I chose to compare visual metaphors to linguistic metaphors, to investigate how these two modalities afford the construction and representation of metaphor. To the best of my knowledge there are no other studies that compare the structure and functioning of these two modalities of expression in relation to metaphor. A recent body of work on visual and audiovisual representations has shown in a qualitative fashion how some of the conventional (conceptual) metaphors extracted from language use can be expressed in visuals as well (see, for example, Forceville, 2011; Hidalgo & Kraljevic, 2011; Yu, 2008), as well as how primary metaphors (Ortiz, 2011) and even image schemas (Pérez Hernández, 2014) can be realized visually. The approach used by these authors is top-down: given a set of conventional conceptual metaphors, the scholars searched for possible realizations in the visual modality. The purpose of this study is different, in that it aims at observing how each of the two modalities of expression of metaphor functions, independently from one another, in a bottom-up fashion, in order to let potential differences emerge spontaneously. For this reason, the present study should be considered a first exploratory analysis of these two modalities of expression.

The analyses reported here are based on two sets of materials that have been annotated for metaphoricity: a sample of metaphors extracted from the VU Amsterdam Metaphor Corpus (verbal modality) and a sample of metaphors extracted from the VisMet Corpus materials (visual modality). Given the nature of the two original corpora from which the materials have been sampled, the stimuli used for this analysis are considered representative of the two modalities of metaphor expression (respectively, verbal and visual).

Therefore, this study does not rely on the same set of conceptual metaphors manifested both in text and in pictures. That, however, would be an interesting (and different) experimental design, which could be tackled in further studies aimed at investigating the relationship between verbal and visual metaphor. Moreover, given the methodology used to perform the analyses, the results of this investigation do not prove that visual and verbal metaphors are processed differently in the brain. What the data do support is that the verbal and the visual mode have different affordances for the representation of metaphors.

2. Theoretical background

The importance of semantic features as a proxy for the content of concepts is supported by a variety of studies aimed at unraveling mechanisms of conceptual processing, which suggest that human performance on different tasks involves the activation of such feature-related information. A few examples are:

  • Feature typicality: McRae, Cree, Westmacott, and de Sa (1999) show that participants are faster at verifying that a feature applies to a specific concept if it is strongly rather than weakly correlated with the other features of that concept. For example, they verify that “has a long tail” is a feature of rat faster than of pony, because this feature is highly correlated with the other features of rat, but not with those of pony. This suggests that the typicality of features predicts the retrieval of concepts.

  • Number-of-features (NoF) effects: processing words that have a high number of features (such as ambulance) is easier than processing words with a low number of features (such as ball), in categorization tasks, lexical decision, and naming responses (Pexman, Lupker, & Hino, 2002; Pexman, Holyk, & Monfils, 2003; Grondin, Lupker, & McRae, 2009). This suggests that words with many features are ‘anchored’ better in our semantic memory.

  • Effects of shared and distinctive features: distinctive features (features shared by only one or two concepts) hold a “privileged status” in meaning processing (Cree, McNorgan, & McRae, 2006). For example, distinctive properties of living things are activated more slowly than shared properties (Randall, Moss, Rodd, Greer, & Tyler, 2004).

Studies on semantic features have also been conducted to support the idea that the semantic information carried by these attributes and the semantic information carried by free word associations are different, and that they afford different processing mechanisms. An extensive review of these studies, as well as the implications that they have for conceptual vs. linguistic processing, is presented in Bolognesi, Pilgram, and Van den Heerik (unpublished observations). For example, McRae and Boisvert (1998) and Cree, McRae, and McNorgan (1999) found evidence of automatic priming for concept pairs that shared semantic features (spider–insect), while the same effect was not found for word pairs that were associatively related according to linguistic word associations (spider–web). Moreover, Wu and Barsalou (2009) showed that free word associations and semantic features that describe given concepts differ, by manipulating the experimental instructions given to participants, who were asked to produce descriptors for given stimuli. By asking participants to produce (i) descriptors for the mentally simulated concept, and (ii) free word associations, they obtained different sets of semantic information. In addition, Barsalou, Santos, Simmons, and Wilson (2008) found that, when participants are given a concept and asked to describe it, the linguistic associations peak before the semantic features that reveal a deeper mental simulation of the corresponding referent. Simmons, Hamann, Harenski, Hu, and Barsalou (2008) added evidence from neuroimaging experiments, showing that when participants are given a concept and asked to describe it, they first use word associations (which activate left-hemisphere language areas such as Broca’s area), and then produce semantic features (which activate bilateral posterior areas, typically involved in mental imagery).

Assuming that metaphors are matters of thought (see Conceptual Metaphor Theory, from now on CMT: Lakoff & Johnson, 1980) rather than shallow figures of speech, semantic features, which tap into conceptual knowledge rather than shallow word associations, should provide insights into the similarity between concepts aligned in metaphors, regardless of the modality in which the metaphor is expressed. Qualitative differences between the types of features that account for the similarity of the domains, on the other hand, are expected, because of the peculiarity of each modality of expression.

In this respect, a variety of empirical studies suggest that pictures have privileged access to semantic memory compared to words (Shelton & Caramazza, 1999), and constitute a more direct realization of the underlying concepts compared to words (Binder, Desai, Graves, & Conant, 2009). According to the literature, not all the perceptual information cued by visual representations of given concepts is encoded in language, or even available for us to be aware of in the first place, and in fact many functional neuroimaging studies suggest different patterns of activation during matched word and picture recognition tasks (Gorno-Tempini et al., 1998; Moore & Price, 1999; Chee et al., 2000; Hasson, Levy, Behrmann, Hendler, & Malach, 2002; Bright, Moss, & Tyler, 2004; Gates & Yoon, 2005; Reinholz & Pollmann, 2005). Moreover, as indicated in Binder et al.’s (2009) meta-analysis, different studies argue against a complete overlap between the knowledge systems underlying word and object recognition, based on the existence of patients with profound visual object recognition disorders but relatively intact word comprehension (Warrington, 1985; Farah, 1990; Davidoff & De Bleser, 1994).

Finally, a well-known and empirically supported account of cognition suggests that mental representations afford multiple coding strategies. In particular, the Dual Coding Theory claims that two functionally independent but interconnected multimodal systems provide mental representations of our conceptual knowledge. The two systems represent verbal and non-verbal knowledge, respectively (Paivio, 1971, 1986, 2010).

The present study seeks to explore whether the number of shared features between two concepts can account for their relatedness when the two concepts are aligned in an A-is-B metaphor. For this purpose, I chose to compare two modalities of expression (visual and verbal), and to explore whether the two modalities differ with respect to the number and type of semantic features that the metaphorical domains evoke and (possibly) share.

Given the research questions stated above, and the literature reviewed in the previous paragraphs, I predict that metaphors expressed through images and metaphors expressed through words might trigger different sets of conceptual knowledge about the concepts involved, because the cognitive processing of visual vs. linguistic stimuli affords different routes to the semantic system, as well as different mental representations. In light of this, different types of semantic features might account for the similarity between metaphor terms of visual vs. verbal metaphors, because the modality in which the metaphor is expressed might direct our attention to different aspects of the content of the source and the target.

In order to do this, I collected thousands of instances of semantic features that describe the concepts compared in visual and verbal metaphors in experimental settings, and then compared the features produced for source–target pairs.

3. Method

In this study I analyze the entities compared in a set of visual metaphors and the entities compared in a set of linguistic metaphors (Footnote 2) by means of semantic feature elicitation and examination.

The semantic features were collected in an experimental setting, through a property generation task, according to established procedures found in the literature about conceptual categorization (see, for example, McRae et al., 2005; Footnote 3).

In terms of experimental design, the number of overlapping semantic features found between a source domain and a target domain of a (visual or verbal) metaphor constitutes the dependent variable of the experimental study described here, while the modality of expression constitutes the independent variable. In addition to this, the concreteness of the metaphorical domains involved in visual and in verbal metaphors was considered as a second independent variable in the statistical analysis, in order to examine its potential effect on explaining the variance of the dependent variable through a regression analysis. This decision was taken after observing (as described below) that visual metaphors tend to use concepts that are on average more concrete than those used in verbal metaphors.

3.1. participants

Ninety American undergraduate students, enrolled in different faculties at different American universities, participated in the data collection and were rewarded with 5 euros each for completing the task. The students, both male and female, declared themselves native speakers of American English, and were tested at the International Center for Intercultural Exchange in Siena, Italy.

3.2. sampling visual and verbal metaphors

The sets of visual and verbal metaphors used as stimuli to elicit semantic features from the participants were randomly selected from the VU Amsterdam Metaphor Corpus (Footnote 4) and the VisMet Corpus (Footnote 5). These corpora are balanced and representative of the two modalities, and therefore capture the inherent variability specific to each modality.

A sample of fifty visual metaphors and a sample of fifty linguistic metaphors were randomly selected for the analysis. However, as often happens when dealing with real-world data (see discussion in Goodall, Slater, & Myers, 2013), in order to be suitable for the present investigation the metaphors had to meet a number of criteria, described here:

  1. The selected metaphors had to be taken from different genres. For linguistic metaphors these are academic discourse, conversations, fiction, and news, while for visual metaphors these are advertisements and social campaigns, illustrations, political cartoons, and photographs.

  2. Different types of realization had to be taken into account, when possible. For linguistic metaphors, mainly indirect metaphors were taken into account (i.e., metaphors in which there is a contrast as well as a comparison between the contextual and a more basic meaning; see Steen, Dorst, Herrmann, Kaal, Krennmayr, & Pasma, 2010), because of their significantly higher frequency in language use, compared to direct metaphors (see reference above). For visual metaphors, the different types in which source and target domains can be visually cued, according to established models, were taken into account (Forceville, 1996; Phillips & McQuarrie, 2004).

In addition to these criteria, only the visual metaphors without verbal anchors were taken into account, which means that those images in which the linguistic clues constituted a meaningful part of the metaphor, and helped with its interpretation, were dropped. In this sense, only strictly visual metaphors were included. To achieve this, the visual metaphors were manipulated, so that all the linguistic clues presented in the images were covered, and the images were shown to three participants, who had to interpret the meaning of the image in an informal, think-aloud investigation, without relying on the linguistic information conveyed by the verbal anchors (which was graphically blurred). Those images where at least one participant could not understand the metaphor, because the verbal anchors were covered, were dropped and then replaced, until the total number of fifty visual metaphors was reached.

3.3. identifying source and target domains

The metaphorical terms that are compared in these metaphors were identified through specific procedures that have been proposed in the field of metaphor studies. For linguistic metaphors, the MIPVU procedure was applied (Steen et al., 2010). This procedure relies on the idea that the majority of metaphors found in language are not direct comparisons expressed through words (such as, for example, “my lawyer is a shark”), but are instead words used in a metaphorical way in a given context. In this sense, the majority of linguistic metaphors are expressed indirectly, and they imply the existence of a contrast between the contextual meaning of the word (which is metaphorical) and its basic meaning (which is literal). According to this procedure, given a text with a potentially metaphorical word, the contextual meaning and the basic meaning of that word express the contrast from which the metaphor is created. For example, in the sentence “I see what you mean”, the contextual meaning of see is ‘understand’, while the basic meaning refers to the physical ability. The two meanings are in contrast, and therefore the word see is to be considered metaphorical in the linguistic context mentioned above.

For the identification of the metaphor terms involved in visual metaphors, the VisMip procedure was applied (Šorm & Steen, unpublished observations). This procedure relies on the idea that visual metaphors typically present (different types of) perceptually incongruous elements that violate the expected scenario and need to be mentally replaced with other elements, whose function is to restore the visual feasibility (i.e., perceptual congruency) of the scenario. Detecting such elements (step 3 of the VisMip procedure) and replacing them with elements that would help restore the expected scenario (step 4) is the type of cognitive operation that needs to be performed to unravel the metaphor. In this sense, the perceptual incongruities and their replacements constitute the metaphor terms (or part of them), or they cue, by means of metonymies, the abstract concepts that constitute the actual conceptual domains of the metaphor. For example, if a car advertisement shows a car frame with a rhino in place of the (expected) internal engine, the animal constitutes the perceptually incongruous unit, which has to be mentally replaced with a real engine in order to restore the expected scenario (i.e., a car with an engine inside, rather than a rhino). In this metaphor, the car engine is therefore compared to a rhino, and this comparison triggers mappings such as power, strength, robustness, etc. (Footnote 6)

Once the two procedures were applied to (respectively) texts and images, a list of 100 metaphorical correspondences expressed in A-is-B form was obtained. The final list of linguistic and visual metaphors is reported in ‘Appendix 1’, while the final dataset of semantic features will soon be released on the Cogvim project website (http://cogvim.org/).

3.4. collecting semantic features for the concepts previously identified as metaphorical domains

In line with the established databases of semantic features, each concept included in the list of metaphorical domains was described by thirty participants. The total number of unique concepts described amounts to 162 (and not 200, because some concepts appeared in more than one metaphor).

Each participant received a list of sixty stimuli, presented in a randomized order. The instructions were carefully prepared on the basis of an analysis of the instructions used in similar experimental settings (McRae et al., 2005; Recchia & Jones, 2011). In particular, I did not want to bias participants toward the production of a specific type of semantic feature (for example, visual features or affective evaluations), but at the same time I wanted to make sure that the task was clear enough that participants would avoid shallow word associations, such as words rhyming with the stimulus or evident linguistic collocations. The instructions are reported in ‘Appendix 2’.

The randomization of the lists proved to be crucial for avoiding biased semantic features, as it appeared that the features produced for a given concept could be (partially) influenced by the previously described concept. For example, a participant who was asked to describe the concept “eye” produced the feature “organ”, which was also the previous stimulus she had described. Similarly, a participant produced “harsh” in response to the concept “discipline” (the previous concept in her list being “harshness”). This priming effect, in which a previous concept was fully or partially used to describe the subsequent concept, was observed in 27 cases (out of 2,068 semantic features produced in response to a concept); randomizing the order of the lists across participants prevented it from systematically affecting particular concepts.
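As an illustration of the kind of check that can surface such carry-over cases, here is a rough sketch (the data structures and the crude string-matching rule are my own simplifications, not the procedure actually used in the study):

```python
# Flag features that repeat (part of) the immediately preceding stimulus
# in one participant's randomized list, e.g. "organ" produced for "eye"
# right after describing "organ", or "harsh" produced after "harshness".
def flag_priming(stimulus_order, responses):
    """stimulus_order: concepts in the order the participant saw them.
    responses: dict mapping each concept to the features produced for it."""
    flagged = []
    for prev, curr in zip(stimulus_order, stimulus_order[1:]):
        for feature in responses.get(curr, []):
            if prev.lower() in feature.lower() or feature.lower() in prev.lower():
                flagged.append((prev, curr, feature))
    return flagged

order = ["harshness", "discipline"]
resp = {"discipline": ["harsh", "rules", "punishment"]}
print(flag_priming(order, resp))  # [('harshness', 'discipline', 'harsh')]
```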

After a first round of data collection, it appeared that some stimuli were ambiguous, and as a consequence many participants described a meaning of the concept other than the intended one. For example, given the word discipline, all participants described one of the first two meanings that appear in the dictionary (Footnote 7), which are “the practice of making people obey rules of behavior and punishing them if they do not” and “a strict set of rules that controls an activity or situation”. This could be due to the high cognitive salience of these meanings compared to the others: the meanings that the participants described are ranked first in the dictionary, while the intended meaning (discipline as a topic of study) appears later. Similarly, given the word cream, most participants described the first meaning that appears in the dictionary, the liquid derived from milk, rather than the meaning needed for the metaphorical comparison, the facial cosmetic. However, in the case of the word space, all the participants described outer space (the universe outside Earth’s atmosphere), which appears only as the fourth meaning in the dictionary, rather than the meaning needed for making sense of the metaphorical comparison, an empty area, which is also the first meaning listed. This could be due to a mismatch between the cognitive salience of a given meaning and its overall frequency of use in language.

A second round of data collection was conducted for those concepts for which the participants had described the wrong meaning. In this round the ambiguous stimuli were disambiguated with verbal clues provided in brackets next to the stimulus (such as discipline (topic), or force (physical)), as in MRsf (the McRae et al., 2005 norms; for example: bat (animal); bat (baseball)). Once the semantic features were collected, they were reformatted according to a number of parameters (described in the next section), the aim being to remove spelling mistakes from the data and to provide a minimal level of standardization into feature norms, which is necessary for shared features to emerge.

3.5. standardizing the collected semantic features

A common trade-off in the standardization of semantic features is between preserving the specific features as described by the participants and obtaining fair degrees of overlap among concepts by grouping similar features under the same category (see, for example, Recchia & Jones, 2011). The literature on semantic feature norms shows different approaches to the standardization of features into feature norms, and the operational criteria used to define what constitutes one feature, starting from the raw data produced by the participants, vary: in MRsf, for example, the researchers enriched the raw data with key words used as predications to signal feature types (for example “used at” or “is a”), and dropped quantifiers; in VVsf (the Vinson & Vigliocco, 2008 norms), three native speakers jointly judged whether multi-term features were to be separated or kept together, and grouped synonyms under the same feature.

The criterion adopted in the present study was quite conservative, and aimed at leaving the data as untouched as possible. As a result, the actual overlaps of features among concepts are not high. For example, features were not grouped under the same lemma, and singular vs. plural versions of the same word were left as they had been produced by the participants. As a consequence, the automatic procedure that computed the similarity between two domains in terms of number of shared features did not count (for example) singular and plural versions of the same word as the same feature. This choice was made after careful observation of the concept–feature pairs, which revealed that singular vs. plural versions of the same feature can mean different things when referring to certain concepts. Consider the following concept–feature pairs: accumulation–objects, toy–object. In the first case the plural is used to describe a quantity, which is meaningful for the concept accumulation. In the second pair, the singular form is used to define a hypernym of the concept toy. Similarly, in tamer–animals and gorilla–animal, the plural used in the first pair is meaningful, as tamers generally work with more than one animal.

Nouns that were produced with a modifier were mainly left as such, because the combination noun–modifier often identified a different referent compared to the noun used alone: consider the concept–feature pair octopus–animal vs. toy–stuffed_animal. In this case the two features animal and stuffed_animal identify two different referents, and dropping the modifier stuffed would mistakenly bring octopus closer to toy in the multidimensional semantic space generated by the feature norms, on the basis of the shared feature animal. Similarly, in the pair sun–star, compared to success–gold_star, star and gold_star identify two different referents, which would be merged if the modifier gold were dropped. An extreme case such as the pair army–air_force, where the modifier in association with the noun constructs a collocation, shows most clearly why noun modifiers that have been produced as part of a single property should be left untouched in semantic features. In fact, if air were dropped from air_force, then army would mistakenly come closer to motion, given the existence of the pair motion–force. The modifiers were dropped only when they expressed undefined quantities, as in MRsf (example: accumulation–of_many_things), or entities not better specified (example: accumulation–too_much_of_something).

As in the air_force case, other cases of semantic collocations also supported the idea that multi-term features should be left untouched. Consider, for example, the pair jail–stripes vs. America–stars_and_stripes. Here the collocation stars_and_stripes clearly identifies a concept (and a referent) that differs from the one identified by stripes in the jail–stripes pair.

Features produced through different parts of speech were also left as such, because they most often referred to different concepts. This was the case with nouns and adjectives (emotion–anger vs. rhino–angry), as well as nouns and verbs (drawing–coloring vs. picture–colors). However, features expressed through verbs, but in different tenses, were grouped together; typically this was the case with verbs expressed in the infinitive and in the -ing form (for example: accumulationcollect and accumulationcollecting).

Finally, articles and most of the prepositions were dropped, unless they constituted part of a phrasal verb or idiom. An exemplar list of raw vs. standardized features is reported in ‘Appendix 3’.
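The following sketch illustrates the spirit of this conservative standardization (an assumed pipeline written for this article, not the script actually used; phrasal verbs and idioms, which the study preserved, would need a whitelist that is omitted here):

```python
# Conservative standardization: lowercase, drop articles and bare
# prepositions, join the remaining tokens with underscores. Modifiers,
# plural forms, and parts of speech are deliberately left untouched,
# mirroring the criteria described above.
ARTICLES = {"a", "an", "the"}
PREPOSITIONS = {"of", "at", "in", "on", "for", "to", "with", "by"}

def standardize(raw_feature: str) -> str:
    tokens = raw_feature.lower().split()
    tokens = [t for t in tokens if t not in ARTICLES | PREPOSITIONS]
    return "_".join(tokens)

print(standardize("found in the kitchen"))  # found_kitchen
print(standardize("a type of animal"))      # type_animal
```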

3.6. annotating semantic feature types

The data were organized in a table, where each concept–feature pair was reported together with the corresponding production frequency (i.e., how many participants produced that feature in response to that concept). As in MRsf, a threshold of five participants was applied, which means that features produced by fewer than five participants were dropped from the database. Subsequently, each concept–feature pair was annotated on the basis of the established knowledge-based taxonomy and coding scheme proposed by Wu and Barsalou (2009), which was slightly adapted to accommodate the reliable annotation of abstract concepts in addition to concrete ones. The exact adaptations are reported in a separate paper which is currently under review (Bolognesi et al., unpublished observations). In general, we retained the original division into four macro-categories described in the taxonomy, which identify (i) features related to the entity, (ii) features related to the situational context in which the entity appears, (iii) taxonomic features, and (iv) introspections, and slightly adapted the nested categories within each macro-category. Figure 1 and Figure 2 display the categories of the taxonomy applied to our data. Compared to the original taxonomy (Wu & Barsalou, 2009), we merged some of the nested categories into more encompassing ones, because applying them consistently led to disagreements among the annotators (for example, distinguishing between features indicating external components and features indicating internal components of a given concept). Moreover, some category descriptions were slightly changed in order to be applicable to abstract concepts (for example, the category identifying concept properties was described so that abstract concepts could also be conceptualized as entities with components and systemic features, just as concrete concepts are). For the purpose of this study, it should be mentioned that, after several rounds of annotation, three independent annotators achieved consistent agreement on the annotation of a dataset that contained fifty concepts, and therefore around 700 concept–feature pairs (coefficient of agreement k = .83). A single annotator subsequently finalized the annotation task for the remaining concepts.
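The paper does not specify which agreement coefficient k refers to; for more than two annotators, Fleiss' kappa is a common choice, and the toy sketch below shows how it could be computed (labels invented for illustration):

```python
# Fleiss' kappa over toy annotations: rows are concept-feature pairs,
# columns are the three annotators, values are feature-type category ids.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

labels = np.array([
    [0, 0, 0],
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 3],
    [1, 1, 1],
])
table, _ = aggregate_raters(labels)  # counts per item x category
print(round(fleiss_kappa(table), 2))
```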

Fig. 1. METsf feature types, according to the macro-categories described in Wu and Barsalou’s (2009) taxonomy.

Fig. 2. METsf feature types, according to the nested categories described in Wu and Barsalou’s (2009) taxonomy.

4. Data analysis

4.1. validating the dataset: correlation between the collected data and established databases of semantic features

In order to evaluate the quality of the semantic features collected, a correlation study was conducted between the collected data (METsf) and two well-known databases of semantic features (MRsf and RJsf, the Recchia & Jones, 2011 norms). The correlations were calculated on the forty-one concrete concepts that appeared in both METsf and MRsf, as well as on the forty-six concrete and abstract concepts that appeared in both METsf and RJsf. In addition, the correlation was also calculated between MRsf and RJsf (281 overlapping concepts).

The correlation coefficients were calculated as follows. Since the number (and type) of features produced for each concept varied across the databases, the representations of the concepts were not pairwise comparable. For example, bomb has seventeen semantic features in METsf, five in RJsf, and fifteen in MRsf. Therefore, it is not possible to directly correlate the semantic representation of bomb in METsf and RJsf, or in METsf and MRsf, because the vectors for bomb have different lengths in the three databases. For this reason, the semantic representations were compared indirectly, through their relative similarities within each database. In particular, following the procedure for creating similarity matrices described in McRae et al. (2005), a table of similarities was computed for the overlapping concepts within each database. For example, for the forty-one overlapping concepts in METsf and MRsf, a contingency table was created first, with the forty-one concepts in the rows and all the features produced for those concepts in the columns; the production frequency for each concept–feature pair was reported in the cells. Then the cosines (Footnote 8) were calculated between each pair of rows (for example: bomb–missile: 0.64; bomb–trumpet: 0.06; etc.), and reported in the final table of similarities, where the concepts appear both in rows and columns, and the relative similarity (cosine) between each pair is displayed in the cells. The correlation coefficients were computed on the resulting tables of similarities, and are reported in Table 1. Since the correlations are very high, it can be claimed that the semantic features collected within the METsf database are methodologically and content-wise valid, being fully comparable to state-of-the-art semantic feature databases.
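The logic of this validation can be sketched as follows (toy frequencies invented for illustration; the real tables have the overlapping concepts in the rows and all their features in the columns): within each database, pairwise cosines between concept rows are computed, and the two resulting similarity structures are then correlated.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def similarity_structure(freq_table):
    # condensed vector of pairwise cosine similarities between rows (concepts);
    # equivalent to the upper triangle of the table of similarities
    return 1.0 - pdist(freq_table, metric="cosine")

# toy production frequencies: rows = concepts (e.g., bomb, missile, trumpet),
# columns = features; one table per database (features need not match across
# databases, since only within-database similarities are compared)
metsf = np.array([[10, 5, 0, 2], [8, 6, 0, 1], [0, 0, 9, 7]], dtype=float)
mrsf = np.array([[9, 4, 1, 3], [7, 7, 0, 2], [1, 0, 8, 6]], dtype=float)

r, p = pearsonr(similarity_structure(metsf), similarity_structure(mrsf))
print(f"correlation between similarity structures: r = {r:.2f}")
```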

table 1. The average Pearson’s correlation coefficients between concepts that are shared across the databases of semantic features

4.2. computing similarity in terms of shared features between source and target domains

The semantic features were compared between the two conceptual domains aligned in every metaphor. On a qualitative basis it immediately emerged that overlaps of semantic features between metaphor terms were much rarer in verbal than in visual metaphors. In particular, only five (out of fifty) verbal metaphors shared features between the two compared terms, while this happened for twenty-six (out of fifty) visual metaphors.

The cosine measure was used to define the similarity between the two compared concepts in a metaphor, as is usual in this type of research (McRae et al., 2005; Recchia & Jones, 2011). The cosines were computed on the pairs of concept vectors extracted from the contingency table created on the basis of the METsf data (see the previous section for the construction of the contingency table). Table 2 displays the average cosines (and therefore the similarity) between metaphor terms in the fifty verbal metaphors and in the fifty visual metaphors.
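As a minimal illustration (vectors invented; the real ones come from the METsf contingency table), the similarity of one source–target pair reduces to a cosine between two frequency vectors, and the overlap count to the number of dimensions where both are non-zero:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical feature-frequency vectors over the same feature columns
source = np.array([12.0, 0.0, 5.0, 0.0, 3.0])  # e.g., a source like "rhino"
target = np.array([9.0, 2.0, 4.0, 0.0, 0.0])   # e.g., a target like "engine"

shared = int(np.sum((source > 0) & (target > 0)))  # features shared: 2
print(shared, round(cosine(source, target), 2))
```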

table 2. Average cosines between metaphorical domains in verbal and visual metaphors

As Table 2 shows, the cosines between metaphorical terms of visual metaphors are higher on average than the cosines between metaphorical terms in linguistic metaphors. The difference between the cosines in verbal and in visual metaphors is statistically significant (t = 3.282, p < .01).
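The reported comparison is an independent-samples t-test over the fifty cosines per modality; a sketch with placeholder data (random values whose means merely mimic the visual > verbal pattern, not the study's actual cosines):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
visual_cosines = rng.normal(0.15, 0.10, 50).clip(0, 1)  # placeholder values
verbal_cosines = rng.normal(0.03, 0.05, 50).clip(0, 1)

t, p = ttest_ind(visual_cosines, verbal_cosines)
print(f"t = {t:.3f}, p = {p:.4f}")
```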

This means that the information carried by the semantic features tends to capture the similarity between metaphorical terms aligned in visual metaphors more than in verbal metaphors. This finding suggests that visual and verbal metaphors indeed function in different ways, and in particular that the similarity between metaphorical domains expressed in the two modalities can be modeled by semantic feature matching quite well for visual metaphors, but that this is not the case for verbal metaphors. An in-depth analysis of the types of semantic features that the informants produced in response to the concepts used in visual vs. verbal metaphors was therefore required in order to see how such features differ.

4.3. analyzing feature types

Looking at the types of features that were produced by the participants in response to the concepts involved in visual vs. verbal metaphors, the following qualitative observations, visualized in Figure 1 and Figure 2, can be formulated.

As displayed in Figure 1, considering the four macro-categories described in the taxonomy, which identify (i) features related to the entity, (ii) features related to the situational context in which the entity appears, (iii) taxonomic features, and (iv) introspections, it appears that the concepts involved in linguistic and in visual metaphors differ with regard to the type of features that they stimulate. For the concepts that appear in visual metaphors, the most represented macro-category expresses features related to the entity (43.6%), followed by features related to the situation (31.2%), taxonomic features (12.9%), and introspections (12.2%). For the concepts used in linguistic metaphors, the most represented macro-category expresses taxonomic features (33.4%), followed by features related to the situation in which the concept occurs (24.5%), features related to the entity itself (21.3%), and finally introspections (20.1%). In other words, important aspects of the meaning of the concepts involved in visual metaphors reside in the features that describe the entity itself (and in particular in its perceptual properties), while for the concepts used in linguistic metaphors, important aspects of their meaning reside in the features that express taxonomic relations (such as synonyms and hypernyms of the concept). This supports the idea that the two modalities of metaphor expression (visual and verbal) typically involve concepts for which the most salient features are intrinsically different: for concepts typically involved in visual metaphors the entity-related properties are particularly salient, while for concepts involved in verbal metaphors taxonomic properties are particularly salient.

The percentages with which the nested categories of features types are represented, across the concepts involved in visual and in verbal metaphors, respectively, are reported in Figure 2.

As the graphs show, with regard to introspections, the metaphorical terms involved in both visual and linguistic metaphors mainly trigger features that express contingencies (a tendency more accentuated for concepts that appear in verbal than in visual metaphors). Taxonomic features, which are more prominent for the concepts that appear in linguistic rather than in visual metaphors, are distributed differently: concepts that appear in linguistic metaphors favor subordinates, while concepts that appear in visual metaphors favor superordinates. Among the features that express properties of the situation in which a concept appears, concepts that appear in linguistic metaphors favor actions, while concepts that appear in visual metaphors favor functions. Finally, in relation to the macro-category of properties related to the entity, for concepts that appear in visual metaphors the preferred feature type is perceptual properties, while for concepts that appear in linguistic metaphors it is systemic properties. To summarize these tendencies: when they are imagined and described by participants, the concepts involved in the sample of visual metaphors analyzed here typically evoke semantic features that express perceptual properties of the entity, functions performed by the entity in a situation, contingencies, and superordinates of the concept. Concepts involved in the sample of linguistic metaphors analyzed here typically evoke semantic features that express subordinates of the concept, actions performed by participants in specific situations, contingencies, and systemic properties of the concept itself.

Overall, the different configurations reported here suggest that the concepts involved in visual vs. linguistic metaphors evoke different sets of semantic information per se when they are imagined and described.

Finally, when the concepts are aligned in the A-is-B metaphorical correspondences, the types of features that are shared between the source and the target domain can be described as follows. In visual metaphors, the shared features between source and target domains are primarily properties of the entity described by the concept (80% of the total): mainly perceptual properties of the entity defined by the concepts (33.3%), systemic properties of the entity (29.2%), or components (20.1%). In only a few cases are they entity behaviors or properties describing the material of the entity.

On the other hand, the five verbal metaphors that presented some overlapping features between the two terms behave as follows: 50% of the shared features describe properties of the entity (of these, 75% are systemic properties), 37.5% are features describing aspects of the situational context, and 12.5% are taxonomic features.

To summarize the results of this analysis, it appears that the concepts involved as metaphor domains in visual vs. verbal metaphors trigger different types of semantic features when they are imagined and described by informants in property generation tasks. A possible explanation for the different types of features produced in response to concepts that appear in visual and in verbal metaphors arguably relates to the degree of concreteness that these concepts express. For this reason, the concreteness of the concepts involved in the two types of metaphor was taken into account as a second independent variable in the quantitative analysis.

4.4. analyzing the concreteness of source and target domains in visual and in verbal metaphors

An informal observation of the data suggested that visual metaphors typically make use of concrete concepts (because ultimately they have to be depicted graphically), while verbal metaphors typically make use of abstract concepts, which are often compared to more concrete entities that appear as source domains. This observation was quantitatively tested by comparing the average concreteness of the concepts that appeared in visual metaphors to that of the concepts that appeared in verbal metaphors. The concreteness of each of the 162 concepts was retrieved from the database of concreteness ratings collected by Brysbaert, Warriner, and Kuperman (2014). According to their database, in which the ratings vary from 1 (very abstract concept) to 5 (very concrete concept), the average rating for concepts that appear in visual metaphors is 4.7, while for verbal metaphors it is 3.3. The difference between the two concept populations is statistically significant (t = 9.72, p < .01).
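In outline (with an invented four-word slice of the ratings standing in for the 162 real lookups), the comparison amounts to averaging the Brysbaert et al. (2014) ratings per modality and testing the difference:

```python
from statistics import mean
from scipy.stats import ttest_ind

# hypothetical ratings (1 = very abstract ... 5 = very concrete)
concreteness = {"rhino": 4.9, "engine": 4.5, "discipline": 2.3, "idea": 1.6}
visual_domains = ["rhino", "engine"]
verbal_domains = ["discipline", "idea"]

vis = [concreteness[c] for c in visual_domains]
ver = [concreteness[c] for c in verbal_domains]
print(mean(vis), mean(ver))  # group means on the toy data
print(ttest_ind(vis, ver))   # the paper reports t = 9.72, p < .01 on the real data
```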

4.5. analyzing the modality of expression and concreteness as predictors for semantic feature overlaps: regression analysis

Because visual and verbal metaphors scored differently in relation to the concreteness of the domains, it was necessary to rule out the possibility that the different number (and type) of shared features between metaphorical terms was due to the different degrees of concreteness of the metaphors, rather than to their modality of expression.

For this reason, another analysis was carried out, in which the impact of both modality of expression and concreteness was measured on the dependent variable, i.e., the number of overlapping features between metaphorical terms.

First, the correlation between the two independent variables (modality and concreteness) was calculated. The results showed that modality and concreteness are indeed highly correlated (Pearson coefficient r = .79). Because of this, a test was performed to establish whether the data violated the assumption of non-collinearity (Footnote 9). The test indicated that collinearity between modality of expression and concreteness was not a concern (modality of expression, Tolerance = .38, VIF = 2.64; concreteness, Tolerance = .38, VIF = 2.64; Footnote 10). Then a regression analysis was conducted to see whether modality of expression and concreteness predicted the number of shared features in each metaphor. It was found that while the modality of expression significantly predicts overlapping features between metaphorical terms (Beta = .371; t = 2.50; p = .01), concreteness does not (Beta = .076; t = 0.51; p = .61, n.s.). Therefore it can be concluded that it is the modality of expression that determines the number of shared features, and not the degree of concreteness of the concepts involved.
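A sketch of this analysis on synthetic data (coefficients and noise levels invented; statsmodels assumed available) shows the two steps: a VIF-based collinearity check, then the regression of overlap on modality and concreteness. Note that Tolerance is simply 1/VIF.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
modality = np.repeat([0.0, 1.0], 50)  # 0 = verbal, 1 = visual
concreteness = 3.3 + 1.4 * modality + rng.normal(0, 0.4, 100)  # correlated with modality
overlap = 0.5 + 2.0 * modality + rng.normal(0, 1.0, 100)       # toy dependent variable

X = sm.add_constant(np.column_stack([modality, concreteness]))
vifs = [variance_inflation_factor(X, i) for i in (1, 2)]
print("VIF (modality, concreteness):", np.round(vifs, 2))
print("Tolerance:", np.round(1 / np.array(vifs), 2))

model = sm.OLS(overlap, X).fit()
print(model.params, model.pvalues)  # per-predictor coefficients and p-values
```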

5. General discussion

The present study explored how the visual and the verbal modality of expression afford metaphor construction and representation, and whether the shared semantic features between two concepts can account for the similarity between them when they are aligned as metaphor terms. The study, therefore, was based on both verbal and visual metaphors, which were compared and contrasted.

Since both metaphors expressed through images and metaphors expressed through words are assumed to be different manifestations of the same phenomenon, and are therefore both conceptual in nature (see CMT: Lakoff & Johnson, 1980), the method adopted from cognitive psychology (i.e., semantic feature norms) was expected to work equally well for both modalities, and possibly to provide different results in terms of the types of features that can account for the similarity between domains. This prediction was based on the idea that different modalities of expression would trigger in the viewer/reader different sets of conceptual knowledge about the metaphorical domains, and therefore different types of shared semantic features might account for the similarity between metaphor domains of visual vs. verbal metaphors.

This prediction was partially confirmed: the metaphorical terms identified for the visual metaphors triggered features that mainly describe perceptual and systemic properties of the entity, as well as its function, while the metaphorical terms identified for the verbal metaphors triggered features that mainly describe taxonomic properties and introspections. In particular, using the method of semantic feature norms to explore the relatedness between metaphorical terms proved to be quite a reliable method when applied to visual metaphors, but it did not deliver positive results when it was applied to metaphors expressed through language. As a matter of fact, while for many visual metaphors it was possible to observe overlapping features between the metaphorical domains, this was not the case for verbal metaphors.

The results obtained in this exploratory study, which call for further investigation, suggest the following explanation. As illustrated at the beginning of this paper, the visual mode of communication has privileged access to semantic memory compared to the verbal mode. In this sense, the semantic features elicited from informants, who were explicitly asked to read a stimulus, create a mental image, and describe its contents, are probably the product of the semantic information encoded in mental imagery. Such information comes into play when two concepts are compared in a metaphor expressed by visual means.

On the other hand, typical verbal metaphors, i.e., metaphor manifestations that are typically expressed indirectly in discourse, might function in a different way, and might need to be grounded in the conceptual system through a different route (rather than through the perceptual information retrieved from mental imagery). In particular, for the A-is-B verbal correspondences derived from the application of the MIPVU procedure, the similarity between the two terms might need to be retrieved by looking at how the two terms are used in language, rather than at how the two terms appear in mental imagery. The similarity between two terms of a verbal metaphor might be a function of the semantic information that the two terms share in their verbal use. In addition, when the linguistic A-is-B correspondences retrieved through the MIPVU procedure (specifically implemented for linguistic metaphor identification) are analyzed at their conceptual level, and expressed by A-is-B conceptual correspondences (see Lakoff & Johnson, 1980), then the conceptual mappings might emerge in terms of shared semantic features, because our conceptual knowledge comprises information derived from both language and perceptual experiences.

This possible explanation taps into the distinction between a linguistic dimension and a conceptual dimension of meaning, where the linguistic information can be operationalized in terms of language statistics and word co-occurrences (see, for example, Louwerse & Hutchinson, 2012), while the conceptual information is formed on the basis of the semantic information that we derive from language and the information that we derive from perceptual experience. This distinction seems to be supported by a variety of quite recent studies:

“Multiple systems –not just one– represent knowledge […]: linguistic forms in the brain’s language systems, and situated simulations in the brain’s modal systems” (Barsalou et al., 2008, p. 245)

“There is increasing evidence from response time experiments that language statistics and perceptual simulations both play a role in conceptual processing” (Louwerse & Hutchinson, 2012, p. 1)

“Neither perceptual information alone, nor the sets of correspondences between elements in language alone are likely to be able to amount to the sophistication, scale, and flexibility of the human conceptual system. Luckily, humans receive heaping helpings of both of these types of information.” (Boroditsky & Prinz, 2008, p. 112)

All these studies acknowledge that conceptual knowledge encompasses (at least) two streams of information, coming from language and from perceptual experience, and seem to imply that the information conveyed by the two streams does not fully overlap. The present exploratory investigation capitalizes on this distinction, suggesting that metaphors expressed through words and metaphors expressed through images might provide access to different information, which we use to make sense of the metaphorical comparison.

A final remark: one of the qualitative observations that emerged from the present investigation suggests that a fair number of the verbal metaphors taken into account for the analysis seem to relate to structural metaphors such as EMOTION IS A PHYSICAL FORCE and IDEAS ARE OBJECTS, which are highly conventional and share an underlying experiential basis. On the other hand, a fair number of the visual metaphors taken into account seem to be low-level, ad hoc, and novel, such as MOUTHWASH IS A BOMB and RADIO IS A BEGGAR. For these metaphors, perceptual resemblance is often exploited to make a connection between the two domains. This qualitative observation raises a new research question: how does the modality of expression relate to metaphor conventionality? In particular, can the modality of expression (visual or verbal) be a predictor of metaphor conventionality? Such questions can be tackled in a future experimental study in which metaphor modality is the independent variable and metaphor conventionality is the dependent variable.

6. Conclusions

This exploratory study suggests a new paradigm for investigating the conceptual content of metaphorical domains, based on a method used in cognitive psychology to explore the conceptual cores of concrete and abstract concepts. The initial investigation described here shows qualitative and quantitative differences in the similarity that allows us to understand and interpret a metaphorical comparison expressed through words vs. through images.

Such results also bring to light the limitations of the proposed method, which models similarity as a function of the features shared between metaphor domains. In this respect, complementary analyses need to be performed in order to investigate those types of metaphors in which the similarity is not pre-existing, but is created ad hoc, in specific contexts and through specific (visual or linguistic) means. Complementary analyses will also need to investigate other types of conceptual similarity, which can be modeled by looking into different streams of semantic information: databases of relational properties (to account for similarity in terms of shared relational properties), as well as databases of language (to account for similarity in terms of shared patterns of linguistic use of given concepts). These two types of analyses are currently ongoing, and will help to unravel the dynamics of the conceptual grounding of visual vs. verbal metaphor.

Appendix 1

List of linguistic and visual metaphors selected from the Metaphor Corpus and the VisMet Baby corpus, and identified through the MIPVU and VisMip procedures.

Appendix 2

Instructions provided to the participants for the property generation task.

Dear participant,

You will be taking part in the EU-sponsored project ‘COGVIM, the cognitive grounding of visual metaphor’, conducted by the University of Amsterdam (Metaphor Lab). Before the research project begins, it is important that you read about the procedures we will be applying. Please read this brochure carefully.

Purpose of the research project

Over the course of this experiment, you will be provided with a list of written words denoting concrete or abstract concepts (such as car, or hope). You will be asked to imagine the corresponding concept, and to describe the content of your mental image.

This research is important because it offers insight into the features of our mental simulations for concrete as well as for abstract concepts, which are used to build metaphors. At this stage of the project, we cannot provide any further information on the factors we will be examining. You can receive further details after the experiment has ended.

Procedure

• You will be provided with an electronic spreadsheet on a laptop.

• On the first row of this sheet there will be 60 words, each defining an abstract or a concrete concept.

• You are asked to imagine the concept defined by each word, and to write down the content of your mental image, as in the examples below.

• Your descriptions should be one-word labels or short sentences.

• You will write down around 10 features for each stimulus, each in one cell of the same column.

• When you feel that you have no more features to add, you can move on to the next column, where you will find the next concept.

Appendix 3

Examples of synonymic features standardized under the same label

Concept: ACCUMULATION

Features:

• the building up of something / build up > build_up

• too much / too much of something > too_much

• collect / collecting > collecting

• gathered / a gathering of things / some gatherings of things > gathering_of_things

• time / over time / over many years > over_time

• things / many things > things

• large quantity / a quantity > quantity

• add / adding / adding up > adding
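A minimal sketch of how this standardization step could be implemented follows; the dictionary and function names are invented for illustration, and the mapping covers only the ACCUMULATION responses listed above (a real pipeline would extend it to the full response set).

```python
# Minimal sketch of the standardization step illustrated above: raw
# informant responses are mapped onto a single normalized label.
# The mapping below is limited to the ACCUMULATION examples.

SYNONYM_MAP = {
    "the building up of something": "build_up",
    "build up": "build_up",
    "too much": "too_much",
    "too much of something": "too_much",
    "collect": "collecting",
    "collecting": "collecting",
    "gathered": "gathering_of_things",
    "a gathering of things": "gathering_of_things",
    "some gatherings of things": "gathering_of_things",
    "time": "over_time",
    "over time": "over_time",
    "over many years": "over_time",
    "things": "things",
    "many things": "things",
    "large quantity": "quantity",
    "a quantity": "quantity",
    "add": "adding",
    "adding": "adding",
    "adding up": "adding",
}

def standardize(raw_response: str) -> str:
    """Return the normalized feature label for a raw informant response."""
    return SYNONYM_MAP.get(raw_response.strip().lower(), raw_response)

assert standardize("Build up") == "build_up"
```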

Footnotes

1 Another study is currently in preparation, and aims to explore specifically how the relational properties of given concepts can predict the similarity between metaphorical domains in the two modalities of expression. That study will constitute the second work package of COGVIM, a two-year postdoctoral project on the cognitive grounding of visual metaphor.

2 As explained in this section, the linguistic metaphors were identified through the MIPVU procedure, and they express a different dimension of metaphor, compared to the conceptual metaphors defined by Lakoff and Johnson (1980), such as LIFE-IS-JOURNEY. However, the two dimensions (conceptualization and expression) relate to the same phenomenon (see Steen, 2008).

3 The dataset of semantic features collected for this study will be referred to as METsf, while the McRae et al. (2005) dataset will be referred to as MRsf, the Recchia and Jones (2011) dataset as RJsf, and the Vinson and Vigliocco (2008) database as VVsf.

5 <http://www.vismet.org/VisMet/>. The selection includes images for which authorization to reproduce is still pending.

6 The reader might object that, given the same metaphorical image, the domains can be verbalized in different ways, even when the VisMip procedure is carefully applied. I am aware of this potential drawback. However, the same problem applies to the formulation of the A-IS-B comparison in linguistic metaphors (as well as in conceptual ones). For a detailed discussion of this topic, see Bounegru and Forceville (2011) and Forceville (2016a, 2016b).

7 Macmillan Dictionary Online.

8 Cosines are commonly used in distributional semantics as metrics to measure the similarity between concept vectors (see, for example, McRae et al., 2005; Recchia & Jones, 2011). Cosine values range from –1 to 1; the higher the cosine, the higher the similarity between two concepts.
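For reference, the cosine between two concept vectors $\mathbf{a}$ and $\mathbf{b}$ is defined as

$$\cos(\mathbf{a},\mathbf{b}) \;=\; \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert} \;=\; \frac{\sum_{i} a_i b_i}{\sqrt{\sum_{i} a_i^2}\,\sqrt{\sum_{i} b_i^2}},$$

where, in the present setting, $a_i$ and $b_i$ can be read as the weights (e.g., production frequencies) of feature $i$ for the two concepts.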

9 Collinearity might arise when the correlation between two variables is very high (for example, above .80).

10 As a rule of thumb, two variables are considered collinear when Tolerance is closer to 0 than to 1 and the VIF (Variance Inflation Factor) exceeds 5.
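As an illustration only, here is a minimal sketch of how Tolerance and VIF can be computed for a matrix of predictors; this is not the analysis code actually used in the study, and the function name is invented.

```python
import numpy as np

def tolerance_and_vif(X):
    """Tolerance and VIF for each column of an (n x p) predictor matrix X.

    Each predictor is regressed on the remaining ones via ordinary least
    squares; Tolerance is 1 - R^2 of that regression, and VIF = 1/Tolerance.
    """
    n, p = X.shape
    diagnostics = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # add an intercept column
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        residuals = y - A @ beta
        r_squared = 1 - (residuals @ residuals) / ((y - y.mean()) ** 2).sum()
        tolerance = 1 - r_squared
        diagnostics.append((tolerance, 1 / tolerance))
    return diagnostics
```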

References

Barsalou, L. W., Santos, A., Simmons, W. K., & Wilson, C. D. (2008). Language and simulation in conceptual processing. In De Vega, M., Glenberg, A. M., & Graesser, A. C. (Eds.), Symbols, embodiment, and meaning (pp. 245–283). Oxford: Oxford University Press.
Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796.
Black, M. (1979). More about metaphor. In Ortony, A. (Ed.), Metaphor and thought (pp. 19–43). Cambridge: Cambridge University Press.
Boroditsky, L., & Prinz, J. (2008). What thoughts are made of. In Semin, G. & Smith, E. (Eds.), Embodied grounding: social, cognitive, affective, and neuroscientific approaches (pp. 98–115). New York: Cambridge University Press.
Bounegru, L., & Forceville, C. (2011). Metaphors in editorial cartoons representing the global financial crisis. Visual Communication, 10(2), 209–229.
Bright, P., Moss, H., & Tyler, L. (2004). Unitary vs. multiple semantics: PET studies of word and picture processing. Brain and Language, 89, 417–432.
Brysbaert, M., Warriner, A., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46, 904–911.
Chee, M., Weekes, B., Lee, K., Soon, C., Schreiber, A., Hoon, I., & Chee, M. (2000). Overlap and dissociation of semantic processing of Chinese characters, English words, and pictures. Neuroimage, 12, 392–403.
Cree, G. S., McNorgan, C., & McRae, K. (2006). Distinctive features hold a privileged status in the computation of word meaning: implications for theories of semantic memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 32, 643–658.
Cree, G., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132, 163–201.
Cree, G. S., McRae, K., & McNorgan, C. (1999). An attractor model of lexical conceptual processing: simulating semantic priming. Cognitive Science, 23, 371–414.
Davidoff, J., & De Bleser, R. (1994). Impaired picture recognition with preserved object naming and reading. Brain and Cognition, 24, 1–23.
Farah, M. (1990). Visual agnosia: disorders of object recognition and what they tell us about normal vision. Cambridge, MA: MIT Press.
Forceville, C. (1996). Pictorial metaphor in advertising. London: Routledge.
Forceville, C. (2011). The JOURNEY metaphor and the Source–Path–Goal schema in Agnès Varda’s autobiographical gleaning documentaries. In Fludernik, M. (Ed.), Beyond cognitive metaphor theory: perspectives on literary metaphor (pp. 281–297). London: Routledge.
Forceville, C. (2016a). Pictorial and multimodal metaphor. In Klug, N. & Stöckl, H. (Eds.), Handbuch Sprache im multimodalen Kontext [The language in multimodal contexts handbook] (Linguistic Knowledge Series). Berlin: Mouton de Gruyter.
Forceville, C. (2016b). Visual and multimodal metaphor in film: charting the field. In Fahlenbrach, K. (Ed.), Embodied metaphors in film, television and video games: cognitive approaches (pp. 17–32). London: Routledge.
Gates, L., & Yoon, M. (2005). Distinct and shared cortical regions of the human brain activated by pictorial depictions versus verbal descriptions: an fMRI study. Neuroimage, 24, 473–486.
Gentner, D. (1989). The mechanisms of analogical learning. In Vosniadou, S. & Ortony, A. (Eds.), Similarity and analogical reasoning (pp. 199–241). New York: Cambridge University Press.
Gibbs, R. W. (2006). Embodiment and cognitive science. Cambridge: Cambridge University Press.
Goodall, C., Slater, M., & Myers, T. (2013). Fear and anger responses to local news coverage of alcohol-related crimes, accidents, and injuries: explaining news effects on policy support using a representative sample of messages and people. Journal of Communication, 63, 373–392.
Gorno-Tempini, M., Price, C., Josephs, O., Vandenberghe, R., Cappa, S., Kapur, N., & Frackowiak, R. (1998). The neural systems sustaining face and proper-name processing. Brain, 121, 2103–2118.
Grondin, R., Lupker, S. J., & McRae, K. (2009). Shared features dominate semantic richness effects for concrete concepts. Journal of Memory & Language, 60, 1–19.
Hasson, U., Levy, I., Behrmann, M., Hendler, T., & Malach, R. (2002). Eccentricity bias as an organizing principle for human high-order object areas. Neuron, 34, 490–497.
Hidalgo, L., & Kraljevic, B. (2011). Multimodal metonymy and metaphor as complex discourse resources for creativity in ICT advertising discourse. In Gonzálvez García, F., Peña, S., & Pérez-Hernández, L. (Eds.), Metaphor and metonymy revisited beyond the contemporary theory of metaphor (pp. 153–178). Amsterdam/Philadelphia: John Benjamins.
Johnson, M. (1987). The body in the mind: the bodily basis of meaning, imagination, and reason. Chicago: University of Chicago Press.
Lakoff, G. (1987). Women, fire, and dangerous things. Chicago: University of Chicago Press.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
Louwerse, M., & Hutchinson, S. (2012). Neurological evidence linguistic processes precede perceptual simulation in conceptual processing. Frontiers in Psychology, 3, 385.
McRae, K., & Boisvert, S. (1998). Automatic semantic similarity priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 558–572.
McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37, 547–559.
McRae, K., Cree, G. S., Westmacott, R., & de Sa, V. R. (1999). Further evidence for feature correlations in semantic memory. Canadian Journal of Experimental Psychology: Special Issue on Models of Word Recognition, 53, 360–373.
Moore, C., & Price, C. (1999). Three distinct posterior basal temporal lobe regions for reading and object naming. Neuroimage, 10, 181–192.
Ortiz, M. J. (2011). Primary metaphors and monomodal visual metaphors. Journal of Pragmatics, 43, 1568–1580.
Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, and Winston.
Paivio, A. (1986). Mental representations: a dual coding approach. New York: Oxford University Press.
Paivio, A. (2010). Dual coding theory and the mental lexicon. The Mental Lexicon, 5, 205–230.
Pérez Hernández, L. (2014). Cognitive grounding for cross-cultural commercial communication. Cognitive Linguistics, 25(2), 203–247.
Pexman, P., Holyk, G., & Monfils, M. (2003). Number of features effects and semantic processing. Memory & Cognition, 31, 842–855.
Pexman, P. M., Lupker, S. J., & Hino, Y. (2002). The impact of feedback semantics in visual word recognition: number of features effects in lexical decision and naming tasks. Psychonomic Bulletin & Review, 9, 542–549.
Phillips, B., & McQuarrie, E. (2004). Beyond visual metaphor: a new typology of visual rhetoric in advertising. Marketing Theory, 4, 113–136.
Randall, B., Moss, H., Rodd, J., Greer, M., & Tyler, L. (2004). Distinctiveness and correlation in conceptual structure: behavioral and computational studies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 393–406.
Recchia, G., & Jones, M. (2011). The semantic richness of abstract concepts. Frontiers in Human Neuroscience, 6, Article 315.
Reinholz, J., & Pollmann, S. (2005). Differential activation of object-selective visual areas by passive viewing of pictures and words. Cognitive Brain Research, 24, 702–714.
Shelton, J. R., & Caramazza, A. (1999). Deficits in lexical and semantic processing: implications for models of normal language. Psychonomic Bulletin & Review, 6, 5–27.
Simmons, W., Hamann, S., Harenski, C., Hu, X., & Barsalou, L. (2008). fMRI evidence for word association and situated simulation in conceptual processing. Journal of Physiology Paris, 102, 106–119.
Steen, G. (2008). The paradox of metaphor: why we need a three-dimensional model of metaphor. Metaphor and Symbol, 23(4), 213–241.
Steen, G., Dorst, L., Herrmann, B., Kaal, A., Krennmayr, T., & Pasma, T. (2010). A method for linguistic metaphor identification: from MIP to MIPVU. Amsterdam: John Benjamins.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–352.
Vinson, D., & Vigliocco, G. (2008). Semantic feature production norms for a large set of objects and events. Behavior Research Methods, 40, 183–190.
Warrington, E. (1985). Agnosia: the impairment of object recognition. In Frederiks, J. (Ed.), Handbook of clinical neurology (pp. 333–349). New York: Elsevier.
Wu, L., & Barsalou, L. (2009). Perceptual simulation in conceptual combination: evidence from property generation. Acta Psychologica, 132, 173–189.
Yu, N. (2008). Multimodal manifestation of conceptual metaphors in multimedia communication. Intercultural Communication Studies, 17(1), 79–89.
Fig. 1. METsf feature types, according to the macro-categories described in Wu and Barsalou’s (2009) taxonomy.

Fig. 2. METsf feature types, according to the nested categories described in Wu and Barsalou’s (2009) taxonomy.

Table 1. Average Pearson’s correlation coefficients between concepts that are shared across the databases of semantic features.

Table 2. Average cosines between metaphorical domains in verbal and visual metaphors.