Gender-neutral language.

This is related to TDE/tde#93.

Signed-off-by: Mavridis Philippe <mavridisf@gmail.com>
Mavridis Philippe 2 years ago
parent 69ce1d06cc
commit 30c0048440

@@ -25,28 +25,28 @@ for composing a song, you might require
- several audio tracks
- a mixer
While with artscontrol, the user can setup much of this himself manually, the
problem is that this has to be done over and over again. That is, if he saves
the song, the settings of his effects, instruments and the mixer will not be
saved with it.
While with artscontrol, the users can setup much of this themselves manually,
the problem is that this has to be done over and over again. That is, if they
save the song, the settings of their effects, instruments and the mixer will
not be saved with it.
The main idea of the new interfaces in Arts::Environment is that the sequencer
can save the environment required to create a song along with the the song, so
that the user will find himself surrounded by the same effects, instruments,...
with the same settings again, once he loads the song again.
So, conceptually, we can imagine the environment as a "room", where the user
works in to create a song. He needs to install the things inside the room he
needs. Initially, the room will be empty. Now, the user things: oh, I am going
to need this nice 24 channel mixer. *plop* - it appears in the room. Now he
thinks I need some sampler which can play my piano. *plop* - it appears in
the room.
Now he starts working, and adds the "items" he needs. Finally, if he stops
working on the song, he can pack all what is in the environment in a little
box, and whenever he starts working on the song again, he can start where he
left off. He can even take the environment to a friend, and continue working
on the song there.
that the users will find themselves surrounded by the same effects,
instruments,... with the same settings again, once they load the song again.
So, conceptually, we can imagine the environment as a "room", where a user
works in to create a song. They need to install the things inside the room
they need. Initially, the room will be empty. Now, the user thinks: oh, I am
going to need this nice 24 channel mixer. *plop* - it appears in the room.
Now they think: I need some sampler which can play my piano. *plop* - it
appears in the room.
Now they start working, adding the "items" they need. Finally, if they stop
working on the song, they can pack all that is in the environment in a little
box, and whenever they start working on the song again, they can start where
they left off. They can even take the environment to a friend, and continue
working on the song there.
Note that there might be other tasks (such as creating a film, playing an
mp3 with noatun,...) which will have similar requirements of saving the

@@ -339,7 +339,7 @@ Finally, you can delete the Synth&lowbar;SEQUENCE module, and rather
connect connect the frequency input port of the structure to the
Synth&lowbar;FREQUENCY frequency port. Hm. But what do do about
pos?</para> <para>We don't have this, because with no algorithm in the
world, you can predict when the user will release the note he just
world, you can predict when the user will release the note they just
pressed on the midi keyboard. So we rather have a pressed parameter
instead that just indicates wether the user still holds down the
key. (pressed = 1: key still hold down, pressed = 0: key

@@ -1662,8 +1662,8 @@ objects that are send over wire are tagged before transfer.
</para>
<para>
If the receiver receives an object which is on his server, of course he
will not <function>_useRemote()</function> it. For this special case,
If the receiver receives an object which is on their server, of course
they will not <function>_useRemote()</function> it. For this special case,
<function>_cancelCopyRemote()</function> exists to remove the tag
manually. Other than that, there is also timer based tag removal, if
tagging was done, but the receiver didn't really get the object (due to

@@ -1476,7 +1476,7 @@ Marshalling should be easy to implement.
<listitem>
<para>
Demarshalling requires the receiver to know what type he wants to
Demarshalling requires the receiver to know what type they want to
demarshall.
</para>
</listitem>
@@ -2200,9 +2200,9 @@ it is used in daily &kde; usage: people send types like
<classname>QString</classname>, <classname>QRect</classname>,
<classname>QPixmap</classname>, <classname>QCString</classname>, ...,
around. These use &Qt;-serialization. So if somebody choose to support
&DCOP; in a GNOME program, he would either have to claim to use
<classname>QString</classname>,... types (although he doesn't do so),
and emulate the way &Qt; does the streaming, or he would send other
&DCOP; in a GNOME program, they would either have to claim to use
<classname>QString</classname>,... types (although they don't do so),
and emulate the way &Qt; does the streaming, or they would send other
string, pixmap and rect types around, and thus not be interoperable.
</para>

@@ -368,7 +368,7 @@
<string>Add &amp;track information</string>
</property>
<property name="whatsThis" stdset="0">
<string>Add a description of the song to the file header. This makes it easy for the user to get advanced song information shown by his media player. You can get this information automatically via the Internet. Look at the &lt;i&gt;"CDDB Retrieval"&lt;/i&gt; control module for details.</string>
<string>Add a description of the song to the file header. This makes it easy for the user to get advanced song information shown by their media player. You can get this information automatically via the Internet. Look at the &lt;i&gt;"CDDB Retrieval"&lt;/i&gt; control module for details.</string>
</property>
</widget>
</vbox>
