What makes rhythm
Therefore, these conditions were not used in Experiment 2. The procedure and statistical analysis were identical to Experiment 1. In Experiment 2, a total of ratings were made. Fig 2A shows the distribution of the data for Experiment 2 and Fig 2B shows the normalization obtained with the ordinal regression. The estimated normalized difficulty ratings for each condition are shown in Fig 5 and the results of the ordinal regression can be found in Table 2.
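The ratings were analyzed with an ordinal regression. As a rough sketch of what such a model does, the following pure-Python example maps a latent difficulty value onto probabilities for each rating category via the cumulative-logit link; the thresholds and latent score here are invented for illustration, not the fitted values from Table 2.

```python
import math

def cumulative_logit_probs(latent, thresholds):
    """Probability of each ordinal rating category for a latent difficulty
    score, using the cumulative-logit link of an ordinal regression."""
    # P(rating <= k) = logistic(threshold_k - latent)
    cum = [1.0 / (1.0 + math.exp(-(t - latent))) for t in thresholds]
    cum = [0.0] + cum + [1.0]
    # Per-category probability is the difference of adjacent cumulatives
    return [hi - lo for lo, hi in zip(cum, cum[1:])]

# Hypothetical thresholds for a 5-point difficulty scale (illustrative only)
probs = cumulative_logit_probs(0.8, [-2.0, -0.5, 0.5, 2.0])
print(probs, sum(probs))  # five category probabilities, summing to 1
```

Raising the latent difficulty shifts probability mass toward the higher rating categories, which is how a single estimated value per condition can be read back as a distribution of ratings.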
We consider effects that replicate over both experiments, and thus appear reliable, as the most interesting. Therefore, Fig 6 depicts the interactions we found in both experiments. As in Fig 4 , for the chart on the right, depicting the interaction between musical training and beats missing, participants were divided into two groups for ease of visualization.
Similar to Experiment 1, we found main effects of beats missing and type (see Table 2 for test results). The main effect of musical training did not reach significance. As in Experiment 1, these main effects were hard to interpret in the light of several significant interactions that were present. A small but significant interaction was observed between missing beats and type (see Table 2 and Fig 6).
As in Experiment 1, the interaction between the simple contrast and type was not significant. Thus, participants were more sensitive to the number of accents on the beat in temporal than intensity rhythms, but only when differentiating between rhythms with some degree of syncopation.
With more beats missing, rhythms were rated as more difficult to perceive a beat in, and this effect was larger for temporal than intensity rhythms.
Participants differentiated between rhythms with no beats missing and rhythms with one or more beats missing equally well for both types of rhythms. A three-way interaction was observed between missing beats, accents off the beat and musical training (see Table 2).
Musical novices rated rhythms with some accents off the beat as slightly more difficult than those with few accents off the beat regardless of the number of beats missing. However, musical experts rated rhythms with some accents off the beat as more difficult than those with few accents off the beat only when one beat was missing, but not when two or three beats were missing. Although this three-way interaction was significant, its effect size was very small and this interaction was not found in Experiment 1.
Thus, the practical use and reliability of this effect are questionable. Therefore, the much larger two-way interaction between missing beats and musical training (see Table 2 and Fig 6), which was found in Experiment 1 and replicated in Experiment 2, is of more interest. Contrary to Experiment 1, in Experiment 2 the interaction between type and musical training did not reach significance. Finally, to obtain some estimate of the validity of our experiments, we calculated how much variance our model explained for both experiments, correcting for the systematic differences between users (the random intercept in the models).
In Experiment 1, the proportion of variance explained by our model was 0.

In this study we explored how different types of accents in musical rhythm influence the ease with which listeners with varying musical expertise infer a beat from a rhythm. Both in Experiment 1 and Experiment 2, musical training increased the sensitivity of participants to the number of accents on the beat. Participants rated rhythms with fewer missing beats as less difficult to perceive a beat in.
Musical training enhanced this effect. Contrary to our expectations, this greater sensitivity in musical experts was not selective to temporal rhythms, but also existed for intensity rhythms. Although musical training is not thought to be necessary for beat perception to develop [ 9 , 20 ], training does seem to affect how a listener processes the structure of accents that indicates where the beat is.
In many previous studies using stimuli designed after [ 30 ], the effect of musical training on the detection of a beat was not reported [ 18 , 30 , 65 ] or only musicians were tested [ 19 ].
Grahn and Brett [ 17 ] did examine the effect of musical training on the detection of a beat in temporal rhythms and did not find significant differences between musicians and non-musicians. However, they used a discrimination task, which implicitly probed beat perception. In a similar study, in which participants rated beat presence, differences were found between musicians and non-musicians [ 42 ].
That rating task strongly resembled the current task, as it required an explicit rating. Thus, musical novices may be capable of detecting a beat just as well as musical experts but may have less explicit access to the information required to make a rating of beat presence.
In line with this, other work has shown that musical training enhances beat perception only when people attend to rhythm, but not when they ignore it [ 21 ]. As such, some aspects of beat perception may be more automatic, and independent of musical training, while aspects of beat perception that are related to attention and awareness may be enhanced by training.
Future studies could examine potential differences between beat perception and beat awareness in musical novices and experts.
The current experiments suggest that musical experts are more able to use the regular structure of accents to infer a beat. This is in line with the finding that musical experts are more able than musical novices to use not only negative evidence, but also positive evidence, to infer metrical structure [ 55 ]. Musically trained listeners likely also have had more exposure to music than musical novices.
They may therefore have stronger a priori expectations for duple metrical structures, as were used in the current experiments.
Thus, the mismatch between accents in the rhythm and the perceived metrical structure may have been larger for musically trained than untrained participants. In both Experiment 1 and 2, participants were more sensitive to the number of accents on the beat in temporal than in intensity rhythms. The effect size of this interaction was extremely small, which warrants some caution in interpreting its practical use. Nonetheless, the interaction was highly significant in both experiments, with independent participants, and as such, seems reliable.
The greater the number of beats missing in a rhythm, the more difficulty participants reported in finding a beat. This effect was larger for temporal than intensity rhythms. However, the results from the planned contrasts suggest that the effect of missing beats on the ratings was not just quantitatively but also qualitatively different for temporal and intensity rhythms.
While the interaction between beats missing and type was significant for rhythms with one or more beats missing (as tested with the polynomial contrast), participants differentiated between rhythms with no beats missing and rhythms with one or more beats missing (as tested with the simple contrast) equally well for both types of rhythms.
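The planned contrasts mentioned here can be written out explicitly. The sketch below shows one common coding for a four-level factor (0, 1, 2 or 3 beats missing): simple contrasts against a no-beats-missing reference, and orthogonal polynomial contrasts for trends across levels. This is our illustrative coding, not necessarily the exact contrast matrix used in the analysis.

```python
# Contrast codes for a four-level factor (0, 1, 2 or 3 beats missing).

# Simple contrasts: each level of beats missing compared against the
# reference level of 0 beats missing.
simple = [
    [-1, 1, 0, 0],   # 1 beat missing  vs 0
    [-1, 0, 1, 0],   # 2 beats missing vs 0
    [-1, 0, 0, 1],   # 3 beats missing vs 0
]

# Orthogonal polynomial contrasts for four equally spaced levels:
# linear, quadratic and cubic trends across the number of missing beats.
polynomial = [
    [-3, -1, 1, 3],  # linear
    [1, -1, -1, 1],  # quadratic
    [-1, 3, -3, 1],  # cubic
]

# The polynomial contrasts are mutually orthogonal
for i in range(3):
    for j in range(i + 1, 3):
        dot = sum(a * b for a, b in zip(polynomial[i], polynomial[j]))
        print(i, j, dot)  # each pair has dot product 0
```

The simple contrasts answer "does adding any missing beats matter at all?", while the polynomial contrasts describe the shape of the effect across one, two and three missing beats.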
Thus, listeners did differentiate between intensity rhythms that were strictly metric and intensity rhythms with beats missing. This may indicate that the Povel and Essens model [ 30 ] cannot be translated completely to rhythms with intensity accents. As these types of accents are commonly used in real music, studies of beat perception with only temporal rhythms may not provide a full picture of the mechanisms of beat perception in music.
Grahn and Rowe [ 42 ] found that the brain networks involved in beat perception differed between intensity rhythms and temporal rhythms, and in the current study responses to the two types of rhythms were qualitatively different. More research is needed to understand how a beat is induced by music, where acoustic information as well as temporal cues are important. In Experiment 1, musical novices, as expected, rated temporal rhythms as more difficult than intensity rhythms.
This effect was generalized over all rhythms and was not modified by the amount of counterevidence. Musicians are more sensitive to the grouping rules that indicate temporal accents than non-musicians [ 66 ]. Thus, those with little musical training may have found it more difficult to extract information from the temporal rhythms than those with more musical training.
In addition, musical novices attend more to lower (faster) levels of regularity in a metrical structure than musical experts [ 48 , 49 ]. In the intensity rhythms, all subdivisions of the beat contained a sound, creating an explicit isochronous pattern at a faster rate than the beat. Participants with little musical training may have focused on this lower level of regularity in judging how easy it was to hear a beat and may have ignored the accents altogether at the hierarchically higher level of the beat, whereas participants with more extended musical training may have been more attuned to events at all levels of the metrical hierarchy.
The interaction between type and musical training, however, was absent in Experiment 2, and must thus be interpreted with caution. In the more restricted set of rhythms used in Experiment 2, the variability in the temporal rhythms was less than in Experiment 1, as we controlled for event density and the distribution of the temporal intervals used.
The temporal rhythms in Experiment 2 were therefore more similar to each other than in Experiment 1, and this may have allowed participants to learn to recognize the intervals that were used. This may have made it generally easier for the musical novices to understand the grouping structure of the rhythm and may have therefore eliminated the difference between the two types of rhythms. It must also be noted that the sample of participants may have differed between the two experiments in ways that we cannot know.
Generally, participants in Experiment 2 reported fewer years of musical training than in Experiment 1. However, it may be that they had superior beat perception abilities because of more exposure to musical rhythm [ 53 ] or innate ability [ 46 ]. The effects of accents off the beat were not consistent over the two experiments, with a main effect in Experiment 1 and an interaction between accents off the beat, missing beats and musical training in Experiment 2.
In both experiments, the effect sizes for the influence of accents off the beat were extremely small. This is in line with Dynamic Attending Theory, which predicts more attentional resources on the beat and less detailed processing off the beat [ 33 ]. However, the weak results for counterevidence off the beat may also have been due to the design of the experiment.
The difficulty ratings made by musical experts for temporal rhythms do show a numerical trend in the expected direction, with higher difficulty ratings for rhythms with more counterevidence off the beat. This effect weakens when rhythms become very complex. The effects of accents off the beat thus seem to be present only for musical experts, and only for rhythms with little counterevidence on the beat, hence the three-way interaction between accents off the beat, missing beats and musical training in Experiment 2.
As the effect of accents off the beat is thus present only in a small subset of the total rhythms (only in 2 of the 8 conditions used in Experiment 2, and only for musically trained participants), the experiments may have lacked the power to detect the effects of counterevidence off the beat consistently. The lack of an effect of accents off the beat in musical novices and for rhythms with many beats missing can be explained in two ways. First, it is possible that listeners do not differentiate between rhythms once it becomes too difficult to infer a beat.
Thus, when three beats are missing, no beat is induced, and any further counterevidence created by accents off the beat cannot reduce beat induction any further. This ceiling effect may also explain the slight curvature in the effect of missing beats. While the difference between no counterevidence at all and some counterevidence is large, once it becomes harder to infer a beat, it does not matter whether more counterevidence is added. A second explanation for the weak effects we found for accents off the beat may be that instead of perceiving a rhythm as more complex, people may shift the phase of the beat when too much counterevidence is present.
While the Povel and Essens model [ 30 ] always regards rhythms as a whole entity, in reality, the perception of a beat unfolds over time [ 5 ]. In rhythms with a lot of counterevidence (i.e., very complex rhythms), it is possible that no beat was detected at all. By locally phase-shifting the perceived beat, or by changing the perceived period, a listener could find a new beat and make the rhythm appear less complex.
This may have been easier for musical experts than musical novices. While the effects of accents off the beat were extremely small in our study, the possibility of local phase shifts may be worth considering in stimulus design. If only the number of missing beats is taken into account, beat perception in rhythms that are regarded as very complex may be underestimated.
A general and important challenge for future models of beat perception is to account for its inherently temporal nature to approximate human listening. Two caveats in our stimulus design must be noted. First, the difference between temporal and intensity rhythms in our study can be characterized not only by the nature of the accents, but also by the presence of marked subdivisions in the rhythms.
In the intensity rhythms all subdivisions of the beat contained a sound, while in the temporal rhythms some subdivisions were silent. When all subdivisions are marked, which is often the case in real music, people may rely less on accents indicating the beat and instead may infer a duple metrical structure from the isochronous subdivisions themselves. This may explain why the effects of counterevidence in the current study were larger for temporal than intensity rhythms.
One way of resolving this issue is by filling all silences in the temporal rhythms with sounds that are softer than the events that indicate the rhythmic pattern. Previously, Kung et al. It is not clear whether the extraction of accents from temporal rhythms as proposed by [ 30 ] and used in the current experiment is the same as when all subdivisions are marked.
This issue may be addressed in future research. Second, we did not equate the different types of accents in terms of salience. However, it has been proposed that the subjective accents perceived in temporal patterns have an imagined magnitude of around 4 dB [ 4 ]. The physically present accents in the intensity rhythms were much larger 8.
Nonetheless, participants were more sensitive to the structure of the accents in the temporal rhythms than in the intensity rhythms. Thus, a discrepancy in salience between temporal and intensity accents would have led to an underestimation of this effect and is unlikely to have caused the effect. For all the effects we report, effect sizes were small. In addition, the total amount of variance our model explains is also arguably small. This casts some doubt on the practical use of the model of how accents influence beat perception, and indicates that a large part of beat perception is influenced by other factors.
Some of these limitations may be inherent in the design of the study. Perhaps most importantly, as noted before, our model like many does not account for the fact that beat perception unfolds over time.
Finally, as we used a web-based study, we did not have experimental control over possible strategies people may have used, and we cannot be certain that they adhered to the instruction not to move.
These limitations suggest some caution in interpreting our results. However, it is reassuring that several findings were reliably replicated, even with an arguably simplified model of beat perception—most notably the interactions between accents on the beat and musical training, and accents on the beat and accent type.
These effects may have a larger external validity as compared with effects found in lab-based experiments, precisely because we found them in both experiments despite all real-world variance and uncertainty caused by the Web-based setup [ 70 , 71 ]. In the current study, we explored how the structure of different types of accents in rhythm influences the perception of a regular beat.
Contrary to our expectations, both musical novices and musical experts were more sensitive to the structure of temporal accents than to the structure of intensity accents. As expected, musical training increased the sensitivity to the accent structure.
Interestingly, beat finding in participants without musical training did not seem to be affected by the number of accents on the beat at all. The large effects of musical training on the perception of the beat may suggest that the use of stimuli with temporal accents in which the complexity is manipulated by varying the number of missing beats, as is often done, may not be meaningful to musical novices.
The intensity accents as implemented in the current study did not improve beat perception for musical novices. However, a different combination of accents, and the use of statistical regularities in indicating the beat (see also [ 21 ]), may be more suited to their beat perception capacities.
The use of non-temporal information in beat perception is not well understood, and studying it may be important to better understand this ability. Our experiment provides a starting point in the use of online experiments to study beat perception.
One could extend this experiment by using a similar setup to obtain data from a larger group of people, for example through services like Amazon Mechanical Turk. Ideally, this could result in a detailed model of how listeners with different backgrounds and experiences deal with different types of accents in rhythm as beat perception unfolds.
Therefore, our experiment can be seen as the beginning of a search for stimulus material that is more ecologically valid, incorporates more musically relevant features, retains experimental control, and tests people varying in musical expertise and cultural background.
Moreover, rhythms with a regular beat have been used in various clinical applications. Better understanding of what is needed for different populations to be able to extract a beat from a rhythm will help in designing more targeted and effective rehabilitation strategies using musical rhythm. The authors would like to thank Conor Wild and Molly Henry for their helpful comments and discussions.
McDonnell Foundation www. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. PLoS One. Published online Jan. Fleur L. Ashley Burgoyne. Jessica A. Sonja Kotz, Editor. Competing Interests: The authors have declared that no competing interests exist.
Received Sep 14; Accepted Dec. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. S2 Sound: Temporal rhythm with 1 beat missing and 0 accents off the beat.
S3 Sound: Temporal rhythm with 1 beat missing and 1 accent off the beat. S4 Sound: Temporal rhythm with 1 beat missing and 2 accents off the beat.
S5 Sound: Temporal rhythm with 2 beats missing and 1 accent off the beat. S6 Sound: Temporal rhythm with 2 beats missing and 2 accents off the beat. S7 Sound: Temporal rhythm with 2 beats missing and 3 accents off the beat. S8 Sound: Temporal rhythm with 3 beats missing and 3 accents off the beat. S9 Sound: Temporal rhythm with 3 beats missing and 4 accents off the beat. S10 Sound: Temporal rhythm with 3 beats missing and 5 accents off the beat.
S11 Sound: Intensity rhythm with 0 beats missing and 0 accents off the beat. S12 Sound: Intensity rhythm with 1 beat missing and 0 accents off the beat. S13 Sound: Intensity rhythm with 1 beat missing and 1 accent off the beat. S14 Sound: Intensity rhythm with 1 beat missing and 2 accents off the beat. S15 Sound: Intensity rhythm with 2 beats missing and 1 accent off the beat.
S16 Sound: Intensity rhythm with 2 beats missing and 2 accents off the beat. S17 Sound: Intensity rhythm with 2 beats missing and 3 accents off the beat. S18 Sound: Intensity rhythm with 3 beats missing and 3 accents off the beat. S19 Sound: Intensity rhythm with 3 beats missing and 4 accents off the beat. S20 Sound: Intensity rhythm with 3 beats missing and 5 accents off the beat.
S1 Fig: Example of the interface used during the online experiment. S1 Data: Dataset from both Experiment 1 and Experiment 2.
Abstract: Perception of a regular beat in music is inferred from different types of accents.
Introduction: In musical rhythm, we often perceive a regular beat.
Experiment 1. Methods. Participants: The data reported here was retrieved from the online application on February 6. Stimuli: We generated all possible rhythms of 9 tones and 7 silences aligned to a grid of 16 positions, with the grid positions representing four beats subdivided into four sixteenth tones (see Fig 1).
Fig 1: Examples of rhythms for each condition.
Table 1: Characteristics of the rhythms used in Experiment 1 (for each combination of missing beats and accents off the beat: the number of possible 16 grid-point rhythms, the number of concatenated 32 grid-point rhythms used, and whether the number of accents off the beat was described as few, some, many, or not used, for temporal and intensity rhythms).
Statistical analysis: In total, ratings were made.
Fig 2: Distribution and normalization of ratings.
Results: Fig 3 depicts the estimated normalized difficulty ratings for each condition.
Fig 3: Estimated normalized ratings for all conditions in Experiment 1.
Table 2: Results of the ordinal regression in Experiment 1 and Experiment 2, marking effects significant in both experiments.
Fig 4: Interactions between beats missing and type and between beats missing and musical training in Experiment 1.
Experiment 2. We controlled the rhythms in Experiment 1 for the number of events and accents, and we allowed a maximum of five consecutive events. Methods. Participants: We retrieved the data for Experiment 2 from the online application on February 6. Stimuli: The stimuli were generated in exactly the same way as for Experiment 1, but with two extra constraints on the temporal rhythms: only rhythms with no more than three consecutive events, and only rhythms consisting of five sixteenth notes, two eighth notes, one dotted eighth note and one quarter note were included.
Table 3: Characteristics of the rhythms used in Experiment 2 (same structure as Table 1).
Procedure and statistical analysis: The procedure and statistical analysis were identical to Experiment 1.
Results: The estimated normalized difficulty ratings for each condition are shown in Fig 5 and the results of the ordinal regression can be found in Table 2.
Fig 5: Estimated normalized ratings for all conditions in Experiment 2.
Fig 6: Interactions between beats missing and type and between beats missing and musical training in Experiment 2.
Discussion: In this study we explored how different types of accents in musical rhythm influence the ease with which listeners with varying musical expertise infer a beat from a rhythm.
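The stimulus logic described above (a 16-position grid representing four beats, each subdivided into four positions) can be sketched as a simple counting procedure for the two kinds of counterevidence used in the experiments. This is an illustrative reading of the design, not the authors' code: indexing from zero, beats are assumed to fall on positions 0, 4, 8 and 12, and accent positions are assumed to be given.

```python
# Beats fall on every fourth position of the 16-position grid (0-indexed).
BEAT_POSITIONS = {0, 4, 8, 12}

def count_counterevidence(accent_positions):
    """Count beats without an accent ('beats missing') and accents that
    fall off the beat, for a set of accented grid positions."""
    accents = set(accent_positions)
    beats_missing = len(BEAT_POSITIONS - accents)
    accents_off_beat = len(accents - BEAT_POSITIONS)
    return beats_missing, accents_off_beat

# Example: accents on beats 0 and 8, plus one accent off the beat
print(count_counterevidence([0, 8, 5]))  # (2, 1): two beats missing, one accent off
```

Under this reading, a strictly metric rhythm is one whose accent set covers all four beat positions, so both counts are zero.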
S1 Sound: Temporal rhythm with 0 beats missing and 0 accents off the beat.

In western music, the time signature of a song dictates how its pulse is measured in each bar and tempo defines how fast the pulse is.
The time signature is represented by a fraction-like symbol that dictates the number of notes per bar and how each note is counted in terms of halves, quarters or sixteenths. The number four on top says that there are four pulses to one bar, and the number four on the bottom says that these pulses are measured in terms of quarter notes. Within a bar there are strong beats that drive the pulse and there are weak beats that counteract the pulse. The strong-weak and strong-weak-weak concepts are part of how duple and triple meter work, and they form the basis for understanding compound and odd time.
If you are interested in using compound time and odd time in your track, you need to understand how beats within any measure are felt in twos or threes. One way to visualize triple and duple meter is to imagine the difference between a rolling triangle and a rolling square, with each new revolution being where the strong beat falls.
Simple and compound time dictate whether a measure's shorter notes (usually eighth notes) are divided into groups of either two or three. But once you know how duple and triple meter work and feel, you can easily handle any odd time pattern. A five-pulse bar, for example, can be cut down to either a duple grouping followed by a triple grouping or a triple grouping followed by a duple grouping.
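The idea that any odd bar can be felt as chains of twos and threes can be sketched in code. The function below is a toy illustration (not standard music software) that enumerates every way to split a bar of n short pulses into ordered groups of 2 and 3:

```python
def groupings(n):
    """All ways to split n pulses into ordered groups of 2s and 3s."""
    if n == 0:
        return [[]]
    out = []
    for size in (2, 3):
        if size <= n:
            out += [[size] + rest for rest in groupings(n - size)]
    return out

print(groupings(5))  # [[2, 3], [3, 2]] -- the two ways to feel a 5-pulse bar
print(groupings(7))  # [[2, 2, 3], [2, 3, 2], [3, 2, 2]]
```

A 5/8 bar therefore has exactly the two feels described above, while a 7/8 bar adds one more group of two and can be felt three different ways.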
Once you know how duples and triples work in combination with one another, you can easily count and feel the rhythm of any time signature. Syncopation in rhythm is when notes are played off the main strong-beat pulse of the time signature. But playing a quick note right before a strong beat can also emphasize the off beat, to create a syncopated feeling. To play an off-beat, syncopated rhythm, it always helps to count the off beats as you count through a bar of music.
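Counting the off beats as suggested above can be made concrete with a small helper (ours, for illustration) that spells out the spoken count for one bar of eighth notes in simple time, with the "&"s marking the off beats:

```python
def count_syllables(beats_per_bar):
    """Spoken count for one bar of eighth notes in simple time:
    numbered on-beats alternating with '&' off-beats."""
    syllables = []
    for beat in range(1, beats_per_bar + 1):
        syllables += [str(beat), "&"]
    return syllables

count = count_syllables(4)
print(" ".join(count))  # 1 & 2 & 3 & 4 &
print(count[1::2])      # the off beats: ['&', '&', '&', '&']
```

Notes that land on the "&" positions, or that arrive just before a numbered beat, are the ones that create the syncopated feel.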
Rhythm is more about togetherness and feeling the groove than it is about knowing how to read sheet music and notation. Jamming with others, listening to what they are playing and communicating with them through sound is an excellent and very fun way to develop your rhythmic sensibilities too.
Depending on where you live, the symbols are the same, but might be called something very different!
These charts cover the basic notes and rests needed to understand simple rhythms and will help you understand what rhythm in music is. How long a note is played or held can also be altered by other symbols, such as the fermata. This musical symbol means to hold the note for as long as you want! It is often a chance for a vocalist, especially, to show off how long they can sustain the note. Tempo simply means how fast or slow the music is performed.
To describe the tempo of the music, you first need to listen for the beat or underlying pulse. Once you can either hear or feel the pulse, you can determine the pace or speed of the music. This will help you better appreciate what the rhythm in the music is based on.
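Tempo is usually quantified in beats per minute (BPM). As a small worked example (the function name is ours, not from the text), converting a tempo to the duration of a single beat is just a division:

```python
def seconds_per_beat(bpm):
    """Duration of one beat in seconds for a tempo in beats per minute."""
    return 60.0 / bpm

print(seconds_per_beat(120))  # 0.5 -> at 120 BPM each beat lasts half a second
print(seconds_per_beat(60))   # 1.0 -> at 60 BPM each beat lasts one second
```

Tapping along and timing the gap between taps works in reverse: a beat every half second means the music is moving at 120 BPM.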
There are many Italian music terms that can be used to describe the tempo of a piece of music. Going from slowest to fastest, some of the more common music terms for tempo are: In music, the tempo does not always stay the same; it can change. Below are some other terms that can be used to describe the changes in tempo. Overall, the song is performed in a moderato tempo, but at approximately , the song suddenly slows down.
The tempo then gradually increases (accelerando) until it finally returns to its original speed of moderato (a tempo). The musical definition of an ostinato is a repeated musical pattern. There are three main types of ostinatos: melodic, rhythmic and chordal. When discussing the rhythm of a piece of music, the main thing is to concentrate on the rhythmic patterns in the music. Another name for an ostinato is a riff.
A riff is also defined as a repeated musical pattern. The only difference between an ostinato and a riff is that a riff is a repeated musical pattern heard and performed in popular music. Funny story, I teach students from grades 7 to 12 here in Australia. A rhythmic ostinato is a repeated rhythmic pattern. These types of ostinatos can be performed by any instrument, either with or without pitch.
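Since an ostinato (or riff) is simply a repeated pattern, it can be illustrated with a small helper of our own making that finds the shortest unit a rhythm sequence repeats:

```python
def shortest_repeating_unit(pattern):
    """Return the shortest prefix whose repetition reproduces the pattern,
    e.g. a one-bar ostinato inside a longer rhythm sequence."""
    n = len(pattern)
    for size in range(1, n + 1):
        if n % size == 0 and pattern[:size] * (n // size) == pattern:
            return pattern[:size]
    return pattern

# 'x' = hit, '.' = rest: a two-bar sequence built from a one-bar ostinato
print(shortest_repeating_unit("x.x.x..x" * 2))  # x.x.x..x
```

If the sequence never repeats, the function simply returns the whole sequence, which is another way of saying there is no ostinato in it.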
More commonly though, instruments that perform a rhythmic ostinato are those without pitch, which can be classified as an idiophone (instruments that are hit, shaken or scraped to make a sound), a membranophone (instruments with a skin or membrane), or as part of the Percussion Family.
In African Music, rhythm is a more important feature than melody, harmony and tonality. Watch closely to see the djembes, dun duns, bougarabou and shakere all perform their own ostinato. Try performing each ostinato, then get a group together and see if you can play these intricately woven rhythms in time together.
It is much harder than you think! A time signature is made up of two numbers. Each number has a different meaning, depending on if it is on the top or bottom. The bottom number tells you what TYPE of beat the music is counted in, and the top number tells you how many of those beats are in each bar of the music.
For example, if the two numbers were both a four, the bottom number four means quarter note (crotchet) beats, and the top number four means that there are four beats in the bar.
The definition of this time signature would be four quarter note (crotchet) beats per bar. In the charts below you can see the most common time signatures and what they mean. There are a few ways to describe a time signature. If the time signature in the music remains constant and does not change, then the music is written in an isometric time signature.
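The two numbers of a time signature can be read off mechanically. A small sketch (our naming, and covering only the common bottom numbers) that turns a time signature into the plain-language description used above:

```python
# Which note value carries the beat, keyed by the bottom number.
NOTE_NAMES = {2: "half (minim)", 4: "quarter (crotchet)", 8: "eighth (quaver)"}

def describe_time_signature(top, bottom):
    """Plain-language reading of a time signature: the top number is how many
    beats per bar, the bottom is which note value carries the beat."""
    note = NOTE_NAMES.get(bottom, f"1/{bottom}")
    return f"{top} {note} beats per bar"

print(describe_time_signature(4, 4))  # 4 quarter (crotchet) beats per bar
print(describe_time_signature(3, 4))  # 3 quarter (crotchet) beats per bar
print(describe_time_signature(6, 8))  # 6 eighth (quaver) beats per bar
```

The lookup table carries both the American and British note names, in keeping with the note earlier that the same symbols go by different names depending on where you live.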
It is important to note that the tempo slows down a little during the chorus as well! Watch the clip to hear the multimetric time signature.
An accent in music means a stress or emphasis on the note, chord or passage.