In a recent study, the privacy experts at pCloud examined the so-called privacy labels in the App Store to identify the apps that process the most user data. The aim was to find out not only which apps use information for their own internal purposes, but also which ones share their data with third-party providers. pCloud is a Swiss-based provider of cloud storage solutions with over 10.5 million users worldwide.

It is understandable that developers collect data in order to improve their own apps. This includes, for example, analyzing errors or crashes in order to fix them in updates. Such use of data is often in the interest of iPhone and iPad users. It becomes more critical, however, when companies resell the collected user data in order to finance themselves with it.

52% of apps for iPhone and iPad share information with third-party providers.

The research results have been summarized by pCloud in an overview. You can view the overview here:

Among other things, this overview covers apps that share the collected data with third-party providers. The information includes, for example, purchases, location, contact details, search and browsing history, financial details and health and fitness data – in other words, highly sensitive data.

The TOP 3 data octopuses according to pCloud:

1st place goes to ➞ INSTAGRAM

– Instagram shares 79% of personal data with third-party providers; the app is sparing with information in only a few categories.

2nd place goes to ➞ FACEBOOK

– At 57%, Facebook shares significantly more than half of all data with third-party providers.

3rd place goes to ➞ LINKEDIN

– LinkedIn shares 50% of its data with third-party providers, somewhat surprisingly including user content – which at LinkedIn means the account holders’ own posts.

Also notable: 6th place goes to YouTube and 7th to YouTube Music. Interestingly, even YouTube’s listening data is shared with third parties. In total, YouTube shares 43% of its customer data.

In 10th place is eBay; notably, eBay also shares all of its data about auctioned and purchased items with third-party providers. In total, eBay shares 36% of its customer data with third-party providers.
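Percentages like these are evidently derived by counting the data categories an app’s privacy label marks as shared. As a purely hypothetical sketch (the category names, flags and numbers below are invented for illustration, not taken from the pCloud study), such a share could be computed like this:

```python
# Hypothetical sketch: derive a "shared with third parties" percentage
# from a privacy label, modeled as category -> shared? flags.
# All categories and values below are invented, not from the study.

def share_percentage(label: dict) -> float:
    """Percentage of privacy-label categories shared with third parties."""
    shared = sum(1 for is_shared in label.values() if is_shared)
    return round(100 * shared / len(label), 1)

example_label = {
    "purchases": True,
    "location": True,
    "contact_details": True,
    "search_history": False,
    "browsing_history": True,
    "financial_info": False,
    "health_and_fitness": False,
    "user_content": True,
}

print(share_percentage(example_label))  # 62.5
```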

So users of these apps need not be surprised if, for example, ads for potential purchases repeatedly follow them into other apps. It should always be kept in mind that these apps also communicate with each other.

Which apps do not pass on data to third parties?

At the other end of the scale of iPhone and iPad apps, however, I also find some positive surprises! These are apps that share no details with third parties, or only very few – and thus no data that third parties could use for marketing purposes.

Surprisingly, they also include well-known companies such as ZOOM.

Also in this category are:

Microsoft Teams, Google Classroom, Telegram and the relatively new Clubhouse.

Thus, sharing customer data with third parties does not seem to be a mandatory requirement to establish a profitable business model on the Internet!

Apple’s new Privacy Data Labels

But it’s not just third-party providers that use users’ data; Apple’s own apps also have access to personal information.

By implementing its own privacy data labels, Apple is pursuing a transparency strategy that makes the transfer of data to third parties, and its use, transparent for the user.

Click here for the Apple guidelines:

Apple intends to introduce these transparency guidelines in a few weeks, with the delivery of iOS 14.5, for all apps in the App Store. It should come as no surprise that this is currently meeting with fierce resistance – especially from Facebook.


I think that this strategy by Apple is definitely conducive to more effective data protection – in essence, ensuring transparency in the use of personal data.

This should make it easier for users to see and decide how their data is used and distributed, and whether or not they want to use the app in question.

It is at least another small step on the way to ensuring transparency in the processing of personal data, as against the large international Internet corporations’ interest in using and exploiting personal data as freely and extensively as possible.

What does that actually mean and what is it all about?

The Diem Association, formerly Libra, co-founded by Facebook, is pursuing the mission of developing an “Internet of money”: a global currency and financial infrastructure for billions of people. The European Central Bank and the Bank of China are also looking at introducing a digital currency to complement cash.

This development raises a host of legal questions. What is money? When is money a currency? Is digital money compatible with Union law as an alternative currency to euro cash?

What is money?

In the advanced civilizations of Asia, coins made of gold, silver or copper were already being used as a means of payment long before the Christian era, before the introduction of paper money detached the value of money from its intrinsic value. With the further spread of paper money, a banking system emerged in Germany in which cash was increasingly dematerialized into book money. This led to the development of a cashless payment system. Current developments now aim at further digitization and internationalization of the monetary system. Blockchain technology promises to enable faster payments at low fees and thus represents a new level of innovation in the monetary system.
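As a minimal illustration of why blockchain technology is considered tamper-evident, and hence attractive for payments, here is a hypothetical hash-chain sketch; real systems add consensus mechanisms, digital signatures and much more:

```python
# Minimal sketch of the tamper-evidence property behind blockchain-based
# payments: each block stores the hash of its predecessor, so altering
# any earlier payment breaks every later link. Illustrative only.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payment: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "payment": payment})

def is_valid(chain: list) -> bool:
    # Every block must reference the actual hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "Alice pays Bob 10")
append_block(chain, "Bob pays Carol 4")
print(is_valid(chain))                       # True
chain[0]["payment"] = "Alice pays Bob 1000"  # tamper with history
print(is_valid(chain))                       # False
```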

In economic terms, money is an asset that serves as a medium of exchange, a unit of account and a store of value, whereby money is always a means of payment, but means of payment do not always necessarily have to be money.

When is money a currency?

In the abstract, a currency is the monetary system of one or more states; in concrete terms, it is a means of payment recognized by the state and determined by law. Against this background, it becomes clear that bitcoin, for example, is neither money nor currency, if only because it lacks a central issuer. Only if the European Central Bank were to introduce blockchain-based central bank money – such as an e-euro – would there be money in the legal sense, which could then represent an additional form of a currency.

Is digital money compatible with Union law as an alternative currency to euro cash?

The future of payments

Under this definition, private blockchain-based means of payment – such as Facebook’s planned Diem – would not, strictly speaking, be currencies in the legal sense. However, they have the potential to trigger profound changes in payments. These blockchain-based payment methods differ greatly in detail, with significantly different scopes and purposes of use. For example, Facebook requires a Facebook account to use Diem, which significantly limits its use.

The proposal for a European Parliament and Council Regulation “on Markets in Crypto-assets, and amending Directive (EU) 2019/1937”, published in September 2020, describes a digital finance strategy, legislative proposals on crypto-assets and digital resilience for a competitive EU financial sector. Its purpose is to settle, for both regulators and market participants, how these new means of payment are to be classified and valued. This can give consumers access to innovative financial products while ensuring consumer protection and financial stability. The proposed regulation addresses issuers of cryptocurrencies, utility tokens, stablecoins and e-money tokens. Reference is made to providers such as PayPal or the European Payments Initiative of 16 major European banks, which aim to further develop cashless, digital payments.


The future of money faces an extremely dynamic development that could usher in a turning point in the history of money. In my opinion, it would certainly be advisable for policymakers and the general public to approach these new developments with an open mind and not to block innovations here prematurely due to national egoisms. The future of money has only just begun.

An important step towards increasing trust in political decisions

Since the end of 2020, there has been an agreement to introduce a mandatory transparency register for the European Parliament, the Council of the EU and the European Commission. This agreement goes back to a Proposal the Commission had already submitted in 2016.

On December 15, 2020, the Parliament, the Council and the Commission reached a final agreement on an Interinstitutional Agreement (IIA). The official signing and entry into force are planned for spring 2021.

Extension and new participation of the EU Council

The new Transparency Register will be managed by a secretariat in which the three institutions Parliament, Council and Commission will participate on an equal basis. To be registered, the applicants will have to comply with a code of conduct. Here, there was also a consensus to introduce stricter provisions on monitoring and investigations to ensure that effective action is also taken if a lobbyist does not comply with the code of conduct. The removal of registered lobbyists from the register is also defined as a possible sanction.

Mandatory registration of activities

The Transparency Register provides that interest representatives must register if they engage in the following activities:

  • Meeting with significant decision-makers and organizations,
  • Participating in hearings and briefings, and
  • Seeking access to institutions.

This includes activities that aim to influence decision-making processes or formulations or implementation of policies or legislation at the EU level. Furthermore, stakeholders must explain what interests and objectives they pursue and which clients they represent, as well as providing information about resources used for interest advocacy, especially sources of funding.

Associations and networks of agencies engaged in lobbying can register voluntarily.

Some activities will remain possible without registration, for example:

  • spontaneous meetings,
  • legal advice and
  • activities of social partners, political parties, intergovernmental relations or Member State authorities.

Provisions for individual institutions

The European Commission: Members of the EU Commission may only meet stakeholders who are listed in the Transparency Register. Information on such meetings is published on the Europa website.

The European Parliament: Here, registration is a requirement for access to its facilities, for presentations at public hearings of parliamentary committees, or for participation in the work of intergroups or other unofficial grouping activities organized in the Parliament.

The EU Council: Here again, an entry in the Transparency Register is required to gain access to its facilities and to participate in thematic information and stakeholder meetings with the Secretary General and the Director General of the General Secretariat of the Council.


Perhaps this initiative at EU level would also be an occasion for the governing parties in the Federal Republic of Germany to think about introducing a corresponding transparency register in Berlin.

At present, lobbyists from over 500 lobby organizations can come and go freely in the Bundestag without any registration or transparency!

Agreements and rules about who receives access, when, how, with whom and why are kept under lock and key by the Bundestag administration, as before: a situation that I find permanently intolerable! The citizens of the Federal Republic of Germany deserve more transparency from their government.

Excessive monitoring and performance checks of employees due to the use of products from U.S. IT corporations are increasingly leading to unlawful restrictions on employee rights and violations of applicable data protection regulations in Germany as well.

While Microsoft responded early to concerns about questionable functions in Office 365, authorities objected to Amazon’s use of certain software. In an update, Microsoft had added an additional analysis function called “Workplace Analytics” to its “Microsoft 365” software package. This made it possible to calculate a productivity score for individual employees. This value includes, for example, information on how many e-mails or messenger messages individual employees send each day, or how often they save files in the Microsoft Cloud or share them with external persons. It also covers technical details, such as the use of slower conventional hard disks instead of faster SSDs. Data on how long webcams are activated during video conferences is recorded here as well.
However, Microsoft backed down and revised the update after data protection advocates intervened. The Productivity Score is now only available in aggregated form at company level, so that it is no longer possible to draw direct conclusions about individual employees.
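To illustrate the difference the data protection advocates insisted on, here is a purely hypothetical sketch (the function, field values and group-size threshold are invented, not Microsoft’s actual mechanism): per-employee counts are replaced by a company-level average, suppressed entirely when the group is too small to anonymize.

```python
# Hypothetical sketch: report activity data only as a company-level
# aggregate, with a minimum group size so that no value can be traced
# back to an individual employee. Threshold and data are invented.

def company_level_score(per_employee_counts: list, min_group: int = 10):
    """Return an average only if the group is large enough to avoid re-identification."""
    if len(per_employee_counts) < min_group:
        return None  # suppress: group too small to anonymize
    return sum(per_employee_counts) / len(per_employee_counts)

print(company_level_score([120, 80, 95]))        # None (suppressed)
print(company_level_score(list(range(10, 30))))  # 19.5
```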
Amazon’s reaction, however, is different. The data protection commissioner of Lower Saxony has expressly prohibited Amazon from using controversial monitoring and performance control software.
With the help of the software, every scanning process that employees perform when storing or removing products is automatically transferred to the foremen’s devices and displayed there. This enables them to monitor each individual work step in real time and, for example, to recognize directly if an employee briefly interrupts their usual work rhythm. This comprehensive data is also used to create detailed employee profiles. Amazon sees no problem at all in the use of the performance-monitoring software and will not accept the authority’s decision.

In my opinion, this legal position does not correspond to the fundamental requirements of the GDPR. A data protection impact assessment, required when using this software according to Art. 35 GDPR, would certainly confirm this. After all, the necessity and proportionality of the use of this software in relation to its purpose, and the risks to the rights and freedoms of the data subjects, must be assessed. This software amounts to total surveillance, which certainly contradicts the fundamental idea of Article 1 of the German Basic Law (Grundgesetz), and thus an essential aspect of the core of the fundamental right to informational self-determination.

Report on safety and liability of the EU Commission to the European Parliament, the Council and the European Economic and Social Committee of 19.2.2020

This report was published together with the White Paper on Artificial Intelligence – a European concept for excellence and trust – by the EU Commission on 19.2.2020. This report analyses the relevant current legal framework in the EU. It examines where there are uncertainties regarding the application of this legal framework due to the specific risks posed by AI systems and other technologies. The report concludes that current product safety legislation already supports an extended approach to protect against all types of risks posed by the product depending on its use. However, in order to provide greater legal certainty, provisions could be included that explicitly address newer risks associated with the new digital technologies. In summary, the report could be said to provide an outlook on the expected legal regulations at EU level for the next few years in the field of AI systems and there in particular with regard to the associated security and liability issues. Here, the report distinguishes between two main areas of regulation, product safety regulations and questions regarding the existing liability frameworks for digital technologies.

  1. Product safety regulations
    While the report concludes that current product safety legislation already supports an expanded concept of protection against all types of risks posed by a product depending on its use, it is not clear how this is to be achieved. However, to create greater legal certainty, provisions could be included which explicitly address new risks related to the new digital technologies.
    1. The autonomous behaviors of certain AI systems during their life cycle can lead to significant security-related changes in products, which may require a new risk assessment. In addition, it may be necessary to provide for human control from the design phase onwards throughout the life cycle of AI products and systems as a protective measure.
    2. Explicit obligations for manufacturers could also be considered, where appropriate, in relation to mental health risks to users (for example, when working with humanoid robots).
    3. EU-wide product safety legislation could include both specific requirements to address the safety risks posed by incorrect data at the design stage and mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems.
    4. The issue of opacity of algorithm-based systems – the possibility of self-directed learning and self-directed performance improvement of some AI products – could be addressed by setting transparency requirements.
    5. In the case of stand-alone software that is marketed as such or downloaded into a product after it has been marketed, existing requirements may need to be adapted and clarified if the software has safety implications.
    6. Given the increasing complexity of supply chains in new technologies, provisions making cooperation between economic operators in the supply chain and users mandatory could also contribute to legal certainty.

  2. Liability regulations
    The characteristics of new digital technologies such as AI may challenge certain aspects of existing liability frameworks and reduce their effectiveness. Some of these features may make it difficult to trace the damage back to an individual, which would be required under most national rules to make fault claims. This could significantly increase costs for the injured party and make it difficult to pursue or prove liability claims against actors other than producers.
    1. Persons who have suffered damage because of the use of AI systems must enjoy the same level of protection as persons who have been harmed by other technologies. At the same time, there must be enough room for the further development of technological innovation.
    2. All options envisaged to achieve this objective – including a possible amendment of the Product Liability Directive and possible further targeted harmonization of national liability laws – should be carefully considered. For example, the Commission invites comments on whether and to what extent it might be necessary to mitigate the consequences of complexity by changing the rules on the burden of proof for damage caused by the operation of AI applications, as provided for in national rules.
    3. In the light of the above comments on the liability framework, the Commission concludes that, in addition to the possible adaptation of this existing legislation, new legislation specifically targeted at AI may be necessary to adapt the EU legal framework to current and expected technological and commercial developments.

    The White Paper identifies the following areas as possible additional regulatory points:
    • A clear legal definition of AI
      A risk-based approach should be taken here, i.e. a distinction should be made between AI applications with high and low risk. Regulatory efforts should be concentrated on high-risk applications, so as not to cause disproportionately high costs for SMEs. The first criterion for the risk class should be whether the AI application is used in a sector where, due to the nature of the typical activities, significant risks are to be expected. The second criterion is whether the application is used in that sector in such a manner that significant risks are likely to arise.
    • Key features
      The requirements for high-risk AI applications can relate to the following key features: Training data, data and record retention, information to be presented, robustness and accuracy, human oversight, special requirements for certain AI applications, for example, remote biometric identification applications.
    • Addressees
      Many actors are involved in the life cycle of an AI system. These include the developer, the operator, and possibly other actors such as manufacturer, dealer, importer, service provider, professional or private user. The Commission believes that in a future legal framework, the individual obligations should be the responsibility of the actor(s) best able to manage potential risks. For example, AI developers may be best placed to manage the risks arising from the development phase, while their ability to control risks in the exploitation phase may be more limited. The Commission considers it essential that the requirements apply to all relevant economic operators offering AI-based products or services in the EU, whether they are established in the EU or not.
    • Compliance and enforcement
      Given the high risk that certain AI applications represent overall, the Commission considers at this stage that an objective ex-ante conformity assessment would be necessary to verify and ensure that certain of the above-mentioned mandatory requirements for high risk applications are met. An ex-ante conformity assessment could include procedures for testing, inspection, or certification. This could include a review of the algorithms and data sets used in the development phase.

      a) Governance
      A European governance structure for AI, in the form of a framework for cooperation between the competent national authorities, is necessary to avoid fragmentation of responsibilities, to strengthen the capacities in the Member States and to ensure that Europe gradually equips itself with the capacities needed for the testing and certification of AI-based products and services.
  3. Conclusion
    Even though the considerations made by the EU Commission in the White Paper and in the report on the impact of artificial intelligence – concerning the adaptation of the differing existing national regulations on artificial intelligence – are still at a very unspecific stage and still in the middle of the political discussion, the following can be stated:
    1. It can be assumed with some certainty that an adapted or supplementary legal regulation at EU level – regarding questions of product safety (i.e. market access requirements) as well as the reorganization of liability issues in connection with AI systems – will come in the course of the next few years.
    2. AI vendors in particular should be prepared for the fact that their algorithms must be transparent and verifiable, and ultimately meet certain certification requirements. In addition, extended liability – and thus responsibility – of the AI provider beyond the known extent of product liability, for example with regard to responsibility for supply chains and complex products, is certainly to be expected. As a result, this will be associated with changed, more transparent development processes and extended responsibility, i.e. considerably higher costs, for example for the corresponding insurance cover.
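The White Paper’s cumulative, risk-based test described above (a high-risk sector plus a high-risk manner of use) could be sketched as follows; the sector list and examples are invented for illustration and carry no legal weight:

```python
# Hypothetical sketch of the White Paper's cumulative risk test: an AI
# application counts as "high risk" only if BOTH criteria are met:
# (1) it is used in a sector where significant risks are typical, AND
# (2) it is used there in a manner likely to create significant risks.
# The sector list below is invented for illustration.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public_sector"}

def is_high_risk(sector: str, risky_use: bool) -> bool:
    return sector in HIGH_RISK_SECTORS and risky_use

print(is_high_risk("healthcare", risky_use=True))   # True
print(is_high_risk("healthcare", risky_use=False))  # False
print(is_high_risk("retail", risky_use=True))       # False
```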

On December 9, 2020, the EU Commission intends to announce a series of new planned competition and antitrust regulations to improve the control of technology groups, particularly the major Internet platforms.

In a report published on November 19, 2020, the EU Court of Auditors also urges the improvement of corresponding EU regulations. In particular, the report criticizes the fact that two antitrust proceedings against Internet platforms already exist under current law, but that enforcement here leaves much to be desired.

Although the EU Commission has opened antitrust proceedings against Google, these have now been pending before the European Court of Justice for more than three years without a decision.

A few weeks ago, the contents of the planned new EU regulations – the so-called Digital Services Act – were leaked, so I would like to give you a short list of the intended regulations in the following:

1. Exclusive use of data

Large online platforms could be banned under the EU Digital Services Act from using collected user data if it is not also made available to smaller platforms. The activities of so-called “gatekeeper” platforms such as Google, Amazon and Facebook are under particular discussion. These large corporations have a disproportionately high degree of economic power and control over the online world and can therefore help decide “at the gate” who may enter the market.

According to the new regulation, gatekeepers are only allowed to use data

  • which are generated on the platform itself,
  • or which are generated and collected on the provider’s other services,

for their own commercial purposes if it is also made available to other commercial users.

2. Rankings

Furthermore, online search engines are to be prohibited from displaying their own services preferentially and in an exposed position. This regulation represents a considerable tightening of the previous EU regulation from July 2019. In this regulation, search engines were only obliged to make it clear and transparent if they give preference to their own products and services.

3. Freedom of choice and Pre-installation

Equally, e-commerce giants will be prohibited in the future from restricting the ability of business users to offer the same goods and services to consumers under different conditions through other online intermediary services. It will also be prohibited for large companies to pre-install only their own apps on hardware systems. It must also be possible for consumers to uninstall applications that have already been pre-installed by the manufacturer.

4. Introduction of a so-called “grey list”

Furthermore, the EU Commission intends to introduce a so-called “grey list” of activities that the executive considers unfair and which may therefore require increased supervision by a competent authority in the future. According to this list, the platform giants are not allowed to prevent third parties from accessing essential information about customers and are instructed not to collect personal data beyond what is necessary to provide their services.

It ultimately remains to be seen in what concrete form the EU Commission will present these regulations in December 2020. However, it can certainly be expected that intensive lobbying by the major Internet groups, as has happened several times in the past, will lead to some changes and public discussion before final adoption. So, as so often in life, it remains exciting.

What is the reason behind this?

In a judgement, the German Federal Supreme Court confirmed the prohibition decision issued by the German Federal Cartel Office against Facebook. Although the judgement of the German Federal Supreme Court was issued more than two months ago, I would like to take up the issue again here and draw attention to two problems that this decision makes clear.


First, it should be noted that this decision will not have any legal consequences for everyday life and behaviour on and with Facebook at the moment.

This is simply due to the fact that the judgement was issued in summary proceedings. In these summary proceedings, the court decided on the prohibition decision issued against Facebook by the German Federal Cartel Office in 2019.

Here is the first point of criticism: the length of time the summary proceedings lasted!

Before the decision was issued, the Federal Cartel Office had already investigated for 3 years. These so-called summary proceedings took more than 4 years until the decision became legally binding!

This appears to be extremely problematic, especially in disputes related to the digital economy, because economic power in digital markets establishes itself quickly.

Facebook, on the other hand, now has the opportunity to have the decision reviewed intensively once again in the main proceedings. It is possible that the German Federal Supreme Court will also seek an opinion from the European Court of Justice in those proceedings, which would mean years of litigation on the merits without any legally binding final decision.

Here the legislator is undoubtedly called upon to ensure effective legal protection!


The second important aspect of this decision is that the German Federal Supreme Court did not adopt the substantive reasoning on Facebook’s dominant position given in the original decision of the Federal Cartel Office. The German Federal Supreme Court did NOT base its decision on a violation of the GDPR as a violation of antitrust law, but instead classified Facebook’s terms and conditions as questionable under antitrust law.

In doing so, the German Federal Supreme Court avoids deciding the question of whether a violation of the GDPR can in principle constitute a violation of antitrust law. Sooner or later, the German Federal Supreme Court will not be able to avoid a statement on this question. The essential point is that the GDPR does not seek to protect personal data against an excessively encroaching state, but against the economic interests of the internationally very well positioned Internet platforms! Their business model consists precisely in generating sales through the intelligent use of personal data.

In conclusion, we can say:

Since we are only at the beginning of intensive clarification by the highest courts of legal questions on the application and scope of the GDPR, the German Federal Supreme Court will sooner or later have to take a clear position on this issue! In essence, the question is whether the protection of fundamental rights, which belongs to public constitutional law, also applies directly to private-law relations with Internet platforms.

In its latest decision, the European Court of Justice declared the Privacy Shield agreement invalid. Essentially, it justified this on the basis of US security laws, which grant the authorities extensive access to the data of EU citizens without significant restrictions and without the possibility of judicial review.

At the same time, the European Court of Justice also decided on the standard data protection clauses by which a data importer in a third country gives a contractual assurance to a European company that data transmitted to it will be processed in accordance with EU data protection standards.

In principle, these clauses should continue to apply, as long as the laws of the destination country allow the data recipient to comply with them. However, since companies in the USA are legally obliged to make their data available to state authorities on a large scale, the European data protection authorities are obliged to suspend or prohibit data transfers to such countries that are based on these clauses.

This has a major practical impact on the international exchange of data!

Data transfers to the USA are now in breach of data protection law if they are made exclusively on the basis of a Privacy Shield certification. This covers not only transfers to processors, e.g. cloud service providers, but also transfers within a group or to business partners for whom at least part of the data processing is performed in the USA.

The use of software tools where at least part of the data processing takes place in the USA as well as the internal data flows to US Group companies have to be checked.

The European Court of Justice indicates that there is no adequate level of protection in the USA due to the uncontrolled monitoring powers of the security authorities.

The only data transfers that remain permissible are those necessary for the performance of a contract, or for the implementation of pre-contractual measures, with the data subject. Communication with American customers or hotel bookings in the USA, for example, are still allowed.

Equally not directly affected is the use of US service providers whose service is provided entirely in European data centers. This is now the case with large US hosting and cloud providers (e.g. Amazon Cloud), as they operate server locations in Europe.

In practice, therefore, the only way forward for the time being is to use standard data protection clauses, which ensure a certain degree of legal certainty. Considerable uncertainty remains, however, regarding the additional examination of the level of data protection in the data recipient's country, which is still necessary.

It therefore remains to be seen how other data protection authorities in Germany and the EU position themselves on the question of the legally compliant use of standard contractual clauses for data transfers to the USA. A renewed attempt to establish a follow-up regulation to the Privacy Shield would be a conceivable option.

However, such an agreement would have to include significant restrictions on the American security laws and an expansion of the legal protection options for EU citizens. This does not seem very promising: the USA will not change its security laws because of EU data protection concerns!

As a result, in practice there is no choice but to await further action from the European Commission and recommendations from the data protection authorities. Announcements to this effect have already been made by both the European Commission and the European Data Protection Board (EDPB). So, unfortunately, as so often, we have to wait and see…

Twitter had begun to flag fake news and false claims in published tweets. In doing so, Twitter wanted to make clear that it questioned the truth of some content.

This also happened to the successful Twitterer Donald J. Trump, who has more than 85.5 million followers. He immediately saw this as censorship of his expression of opinion and threatened to abolish the platforms’ previous exemption from liability for illegal content.

Such a step would have fatal consequences and would dramatically worsen the legal position of the Internet platforms. If they were actually liable themselves for the illegal content of their users, a change in business models and the introduction of upload filters would raise the fear of significant censorship of content.

The question of whether Trump could do this constitutionally by presidential decree is not to be discussed here.

I also found Mark Zuckerberg’s published position on this issue extremely interesting. It becomes clear in the following statement of his: “I do not believe that Facebook and other platforms should be judges of truth!”

Jack Dorsey, CEO of Twitter, responded: “I don’t want to judge the truth, I want to enable people to form a free opinion based on facts!”

And right in the middle of all this stands Donald J. Trump, who firmly believes that everything he says is true and factual.

I suppose it is worth thinking about the terms: fact, opinion and truth.

In the constitutional law of Western democracies, freedom of expression and freedom of the press are traditionally established as very high legal values.

In press law, a fundamental distinction is made between opinion and fact: facts are in principle accessible to objective, scientific proof. An expression of opinion, by contrast, is characterized precisely by the fact that it is not verifiable; it is the result of an individual, intellectual, subjective process that is not subject to verification.

Fortunately, this broad definition of freedom of expression is consistently upheld and protected by the Federal Constitutional Court. An evaluation of the content of an opinion is forbidden, quite in keeping with Voltaire, the forerunner of the French Revolution and of civil liberty.

Freedom of expression in the public space is exactly the right to express and say what others do not want to hear!

Legally problematic, however, is the definition of the concept of truth. Here we leave the justiciable realm of constitutional law and enter the realm of philosophically and religiously shaped world views.

At best, truth can be classified as:

a verifiable fact in its most convincing form.

But viewed this way, it is a verifiable fact and no longer a truth.

This means that when we speak of truth, it always contains an element of subjective conviction. And subjective belief, whether ideological or religious, is a characteristic of the definition of opinion in the constitutional sense. Strangely enough, we are thus dealing with a concept of truth that contains elements of fact as well as elements of the concept of opinion, with its subjectivity.

This reminds me impressively of a quotation from Friedrich Nietzsche, who described truth as something that is always also bipolar: “Pain is always a pleasure, curse always a blessing, night also a sun, and a wise man also a fool […]” (Source: Thus Spoke Zarathustra, p. 402)

After these reflections, it must surprisingly be stated that one must quite agree with Donald J. Trump when he says that everything he says is true. Nevertheless, this is only his own, individual, highly personal truth.

But if he claims facts, he cannot refuse the necessity of proving them.

Again and again we read or hear: “The ECJ has decided” or “[…] the ECJ has today in its decision strengthened the rights of consumers in the EU” or something similar.

In such a case, the questions that come to mind are:

  • What legal effect does the decision have for me as a citizen of the FRG or any other member state?
  • Must all courts in the EU states now base their decisions on this ruling?
  • What function and legal effects do ECJ rulings actually have?

For this purpose, let us first consider the original competences of the ECJ.

  1. The ECJ is responsible within the EU for all findings of EU treaty violations by Member States.
    This means that the ECJ makes legally binding decisions on whether a state has violated EU treaties – for instance, in the case of the ancillary copyright law for press publishers introduced by the Federal Republic of Germany into its Copyright Act.
    For example: In this case, the FRG violated EU treaty law, with the consequence that these rules of the German Copyright Act are invalid! (If a matter is to be regulated by an EU directive – as here – then a member state cannot simply make its own national regulation!)

  2. The ECJ is also responsible for the question of whether a state, by its actions, has violated the fundamental rights of an EU citizen as laid down in the EU Charter of Fundamental Rights. Here, too, a judgment of the ECJ immediately leads to the ineffectiveness of the member state’s actions or of the regulations it has applied!
    For example: The FRG had to change its custody regulations after the father of an illegitimate child, who paid alimony and insisted on his right of contact, filed his case at the ECJ. According to the legal situation at that time, the mother could in principle deny him contact. The father had lost before all German courts, including the German Constitutional Court. Or the case of the law student from Vienna named Schrems, who saw his EU fundamental rights affected by the practice at the time – based on the Safe Harbor Agreement – of exchanging personal data between the EU and the USA. In this case, the decision of the ECJ led to the immediate invalidity of the Safe Harbor Agreement and therefore to the immediate illegality of the entire exchange of data between the USA and Europe.

But what happens in cases where national supreme courts, such as the German Federal Supreme Court, turn to the ECJ?

This only happens in cases where the decision of the national court has to be based on the interpretation of a standard which originated in an EU directive. Here there is a declared political will to ensure uniform legal practice in Europe in the interpretation of EU directives. How does this happen? Well, if the national court has to decide a case involving such a standard, it issues a so-called referral order to the ECJ with questions on the interpretation of that standard.

The ECJ examines the questions presented, in the light of expert opinions, and answers them in the form of a decision.

This is now the decision of the ECJ, of which we read in the media!

This decision goes back to the national court, which now has two options:

  • It agrees with the opinion of the ECJ and decides its case on the basis of this interpretation.
  • It does not agree with the interpretation and decides otherwise.

The result is that the ECJ’s interpretation is not legally binding on the national courts, even though the ECJ is the highest instance at EU level.

The autonomy of the national courts is not affected. So this also applies here: only in the constellations mentioned under 1. and 2. does the ECJ have binding effects and powers in relation to the Member States and their citizens. In all other legal matters, national autonomy and the independence of the courts remain.