Gabriele's answer looks pretty good to me.
However, if you find yourself in a situation where you have a huge amount of data across many buildings, such that you can hold all of the rows for any single building in memory but not all of them at once, then I would be inclined to take a slightly different approach.
In this example I use the MySQL database components, purely because I have a local MySQL database to hand, but everything about this approach holds equally for Oracle or MS SQL Server:

At the very beginning we open a database connection, using the tMySqlConnection component in this case. The remaining two database components (tMySqlInput and tMySqlRow) then reuse this shared connection.
We start by capturing the list of buildings in the database using the following query in tMySqlInput:
"SELECT DISTINCT building FROM filesplittest"
This returns each distinct building once.
We then iterate over each of these buildings, which allows us to hold only the records for the current building in memory for the rest of the job.
Next we use the tMySqlRow component to pull the data for the building in the current iteration, using a prepared statement. An example query that I use looks like this:
"SELECT building, foo, bar FROM FileSplitTest WHERE building = ?"
Then we configure the prepared statement in the advanced settings:

Here we've said that the first parameter (Parameter Index = 1) is the building value that we retrieved earlier. tFlowToIterate has helpfully put it in the globalMap for us, so we retrieve it from there using ((String)globalMap.get("row6.building")) in this case (it is the "building" column that came through on the row6 flow).
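For anyone more comfortable reading plain JDBC, the prepared-statement step amounts to roughly the sketch below. The connection details are placeholders, and the table and column names simply follow the example above:

import java.sql.*;

public class PreparedStatementSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; in the Talend job this is the shared
        // connection opened by tMySqlConnection.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            // In the job, tFlowToIterate puts the current building into the
            // globalMap; here we hard-code one value for illustration.
            String building = "SomeBuilding";
            try (PreparedStatement stmt = conn.prepareStatement(
                    "SELECT building, foo, bar FROM FileSplitTest WHERE building = ?")) {
                stmt.setString(1, building); // Parameter Index = 1
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("building") + ","
                                + rs.getString("foo") + "," + rs.getString("bar"));
                    }
                }
            }
        }
    }
}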
When using a prepared statement, you need to get the data back as a record set object, so you should set up the tMySqlRow schema as follows:

And then we parse it out using the tParseRecordSet component:

With a schema matching this example:

Then we need to iterate over this data set, appending each record to the appropriate CSV. To do this we use another tFlowToIterate component and a slightly annoying detour through a tFixedFlowInput component to read the record's data back out of the globalMap before passing it to tFileOutputDelimited:

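The values in the tFixedFlowInput component are ordinary Java expressions that pull each field back out of the globalMap. A hedged example, assuming the parsed record flow is called row7 (check your own job for the actual key names):

((String)globalMap.get("row7.building"))
((String)globalMap.get("row7.foo"))
((String)globalMap.get("row7.bar"))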
Finally, we append it to a CSV named after the building:

Note that the append flag is set; otherwise each iteration of the job would overwrite the previous iteration's output. We also name the file after the value in the building column.
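The file name in tFileOutputDelimited is likewise a Java expression; a sketch, assuming a /path/to/output directory (the exact globalMap key depends on your own flow names):

"/path/to/output/" + ((String)globalMap.get("row6.building")) + ".csv"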
As Gabriele said: if your data comfortably fits in memory at any one time, then you can simplify the job by simply reading your data into a tHashOutput component and then filtering against the hashed data:

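For a sense of what this simpler approach boils down to, here is a hedged plain-Java sketch of the same idea: read everything once, group the rows by building in memory, then write one CSV per building. The connection details are placeholders and the table and column names follow the example above:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.*;
import java.util.*;

public class InMemorySplitSketch {
    public static void main(String[] args) throws SQLException, IOException {
        Map<String, List<String>> rowsByBuilding = new LinkedHashMap<>();
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT building, foo, bar FROM FileSplitTest")) {
            while (rs.next()) {
                String building = rs.getString("building");
                String line = building + "," + rs.getString("foo")
                        + "," + rs.getString("bar");
                // Group the rows by building, keeping everything in memory.
                rowsByBuilding.computeIfAbsent(building, k -> new ArrayList<>())
                        .add(line);
            }
        }
        // Write one CSV per building, named after the building value.
        for (Map.Entry<String, List<String>> entry : rowsByBuilding.entrySet()) {
            try (PrintWriter out = new PrintWriter(
                    new FileWriter(entry.getKey() + ".csv"))) {
                entry.getValue().forEach(out::println);
            }
        }
    }
}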
We start by reading all the data into a tHashOutput component, which then holds the data in memory for the duration of the job. Talend hides these components for some odd reason, but you can turn them back on by adding them in the Project Properties → Designer → Palette settings:

Next, we read the data back out of the hash using a tHashInput component (linked to the previous tHashOutput component; don't forget to give the tHashInput component the same schema), and then use a tAggregateRow component, grouping by building, to effectively get the distinct building values:

Then we iterate over the distinct building values using tFlowToIterate, and filter the hash (read a second time) on the building value of the current iteration:

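The filter itself is just a Java condition comparing each incoming row's building to the value that tFlowToIterate placed in the globalMap. A hedged example of a tFilterRow advanced-mode condition, assuming the iterating flow is called row2:

input_row.building.equals((String)globalMap.get("row2.building"))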
And finally, once again we append to a file named after the value in the building column: