An important step towards increasing trust in political decisions

Since the end of 2020, there has been an agreement to introduce a mandatory transparency register for the European Parliament, the Council of the EU and the European Commission. This agreement goes back to a proposal the Commission had already submitted in 2016.

On December 15, 2020, the Parliament, the Council and the Commission reached a final agreement on an Interinstitutional Agreement (IIA). The official signing and entry into force are planned for spring 2021.

Extension and new participation of the EU Council

The new Transparency Register will be managed by a secretariat in which the three institutions (Parliament, Council and Commission) participate on an equal basis. To be registered, applicants will have to comply with a code of conduct. There was also consensus on introducing stricter provisions on monitoring and investigations, to ensure that effective action is taken if a lobbyist does not comply with the code of conduct. The removal of registered lobbyists from the register is also defined as a possible sanction.

Mandatory registration of activities

The Transparency Register provides that interest representatives must register if they engage in the following activities:

  • meeting with significant decision-makers and organizations,
  • participating in hearings and briefings, and
  • seeking access to institutions.

This includes activities that aim to influence decision-making processes or the formulation or implementation of policies or legislation at the EU level. Furthermore, interest representatives must explain what interests and objectives they pursue and which clients they represent, as well as provide information about the resources used for interest advocacy, especially sources of funding.

Associations and networks of agencies engaged in lobbying can voluntarily register if they choose to do so.

Some activities will remain possible without registration, for example:

  • spontaneous meetings,
  • legal advice and
  • activities of social partners, political parties, intergovernmental relations or Member State authorities.

Provisions for individual institutions

The European Commission: Members of the EU Commission may only meet interest representatives who are listed in the Transparency Register. Information on such meetings is published on the Europa website.

The European Parliament: Here, registration is a requirement for access to its facilities, for presentations at public hearings of parliamentary committees, or for participation in the work of intergroups or other unofficial grouping activities organized in the Parliament.

The EU Council: Here, too, an entry in the Transparency Register is required to gain access to its facilities and to participate in thematic information and stakeholder meetings with the Secretary General and the Director General of the General Secretariat of the Council.


Perhaps this initiative at the EU level could also prompt the governing parties in the Federal Republic of Germany to consider introducing a corresponding transparency register in Berlin.

At present, lobbyists from over 500 lobby organizations can come and go freely in the Bundestag without any registration or transparency!

The agreements and rules governing who receives access, when, how, with whom and why are kept by the Bundestag administration, as before, under lock and key: a condition that I find permanently intolerable! The citizens of the Federal Republic of Germany deserve more transparency from their government.

Excessive monitoring and performance checks of employees due to the use of products from U.S. IT corporations are increasingly leading to unlawful restrictions on employee rights and violations of applicable data protection regulations in Germany as well.

While Microsoft responded early to concerns about questionable functions in Office 365, authorities have objected to Amazon's use of certain software. In an update, Microsoft had added an additional analysis function called “Workspace Analytics” to its “Microsoft 365” software package. This made it possible to calculate a productivity score for individual employees. This value includes, for example, information on how many e-mails or messenger messages individual employees send each day, or how often they save files in the Microsoft cloud or share this data with external persons. Technical details are also captured, such as the use of slower conventional hard disks instead of faster SSDs. Data on how long webcams are activated during video conferences is recorded here as well.
However, after data protection advocates intervened, Microsoft backed down and revised the update accordingly. The productivity score will now only be available in summarized form at company level, so that it will no longer be possible to draw direct conclusions about individual employees.
Amazon’s reaction, however, is different. The data protection commissioner of Lower Saxony has expressly prohibited Amazon from using controversial monitoring and performance control software.
With the help of the software, every scanning process that employees perform when storing or removing products is automatically transferred to the foremen's devices and displayed there. This enables them to monitor each individual work step in real time and, for example, to recognize immediately if an employee briefly interrupts their usual work rhythm. This comprehensive data is also used to create detailed employee profiles. Amazon sees no problem at all in the use of the performance-monitoring software and will not accept the authority's decision.

In my opinion, this legal view does not correspond to the fundamental legal principles of the GDPR. A data protection impact assessment, which is required under Art. 35 GDPR when using such software, would certainly confirm this. After all, the necessity and proportionality of the use of this software in relation to its purpose and the risks to the rights and freedoms of the data subjects must be assessed. This software amounts to total surveillance, which certainly contradicts the fundamental idea of Article 1 of the German Basic Law, and thus an essential aspect of the core of the fundamental right to informational self-determination.

Report on safety and liability of the EU Commission to the European Parliament, the Council and the European Economic and Social Committee of February 19, 2020

This report was published by the EU Commission on February 19, 2020, together with the White Paper on Artificial Intelligence (a European concept for excellence and trust). The report analyses the relevant current legal framework in the EU and examines where uncertainties exist regarding its application due to the specific risks posed by AI systems and other technologies. It concludes that current product safety legislation already supports an extended approach to protection against all types of risks posed by a product depending on its use. However, in order to provide greater legal certainty, provisions could be included that explicitly address newer risks associated with the new digital technologies. In summary, the report provides an outlook on the legal regulations to be expected at the EU level over the next few years in the field of AI systems, in particular with regard to the associated safety and liability issues. The report distinguishes between two main areas of regulation: product safety regulations and questions regarding the existing liability frameworks for digital technologies.

  1. Product safety regulations
    While the report concludes that current product safety legislation already supports an expanded concept of protection against all types of risks posed by a product depending on its use, it is not clear how this is to be achieved. However, to create greater legal certainty, provisions could be included which explicitly address new risks related to the new digital technologies.
    1. The autonomous behaviors of certain AI systems during their life cycle can lead to significant security-related changes in products, which may require a new risk assessment. In addition, it may be necessary to provide for human control from the design phase onwards throughout the life cycle of AI products and systems as a protective measure.
    2. Explicit obligations for manufacturers could also be considered, where appropriate, in relation to mental safety risks to users (for example, when working with humanoid robots).
    3. EU-wide product safety legislation could include both specific requirements to address the safety risks posed by incorrect data at the design stage and mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems.
    4. The issue of opacity of algorithm-based systems – the possibility of self-directed learning and self-directed performance improvement of some AI products – could be addressed by setting transparency requirements.
    5. In the case of stand-alone software that is marketed as such or downloaded into a product after it has been marketed, existing requirements may need to be adapted and clarified if the software has safety implications.
    6. Given the increasing complexity of supply chains in new technologies, provisions making cooperation between economic operators in the supply chain and users mandatory could also contribute to legal certainty.

  2. Liability regulations
    The characteristics of new digital technologies such as AI may challenge certain aspects of existing liability frameworks and reduce their effectiveness. Some of these features may make it difficult to trace the damage back to an individual, which would be required under most national rules to make fault claims. This could significantly increase costs for the injured party and make it difficult to pursue or prove liability claims against actors other than producers.
    1. Persons who have suffered damage because of the use of AI systems should enjoy the same level of protection as persons who have been harmed by other technologies. At the same time, there must be enough room for further development of technological innovation.
    2. All options envisaged to achieve this objective, including a possible amendment of the Product Liability Directive and possible further targeted harmonization of national liability laws, should be carefully considered. For example, the Commission invites comments on whether and to what extent it might be necessary to mitigate the consequences of complexity by adapting the national rules on the burden of proof for damage caused by the operation of AI applications.
    3. In the light of the above comments on the liability framework, the Commission concludes that, in addition to the possible adaptation of this existing legislation, new legislation specifically targeted at AI may be necessary to adapt the EU legal framework to current and expected technological and commercial developments.

    The White Paper identifies the following areas as possible additional regulatory points:
    • A clear legal definition of AI
      A risk-based approach should be taken here, i.e. a distinction should be made between AI applications with high and low risk. Regulatory efforts should be concentrated on high-risk applications so as not to cause disproportionately high costs for SMEs. The first criterion for the risk class should be whether the AI application is used in a sector where, due to the nature of the typical activities, significant risks are to be expected. The second criterion is whether the AI application is used in such a manner that significant risks are likely to arise.
    • Key features
      The requirements for high-risk AI applications can relate to the following key features: Training data, data and record retention, information to be presented, robustness and accuracy, human oversight, special requirements for certain AI applications, for example, remote biometric identification applications.
    • Addressees
      Many actors are involved in the life cycle of an AI system. These include the developer, the operator, and possibly other actors such as manufacturer, dealer, importer, service provider, professional or private user. The Commission believes that in a future legal framework, the individual obligations should be the responsibility of the actor(s) best able to manage potential risks. For example, AI developers may be best placed to manage the risks arising from the development phase, while their ability to control risks in the exploitation phase may be more limited. The Commission considers it essential that the requirements apply to all relevant economic operators offering AI-based products or services in the EU, whether they are established in the EU or not.
    • Compliance and enforcement
      Given the high risk that certain AI applications represent overall, the Commission considers at this stage that an objective ex-ante conformity assessment would be necessary to verify and ensure that certain of the above-mentioned mandatory requirements for high risk applications are met. An ex-ante conformity assessment could include procedures for testing, inspection, or certification. This could include a review of the algorithms and data sets used in the development phase.

      a) Governance
      A European governance structure for AI, in the form of a framework for cooperation between the competent national authorities, is necessary to avoid fragmentation of responsibilities, to strengthen the capacities in the Member States and to ensure that Europe gradually equips itself with the capacities needed for the testing and certification of AI-based products and services.
  3. Conclusion
    Even though the considerations made by the EU Commission in the White Paper and in the report on the impact of artificial intelligence on adapting the differing existing national legal regulations are still at a very unspecific stage and still in the middle of the political discussion, the following can be stated:
    1. It can be assumed with some certainty that adapted or supplementary legal regulation will be introduced at the EU level in the course of the next few years, both on questions of product safety (i.e. market access requirements) and on the reorganization of liability issues in connection with AI systems.
    2. AI vendors in particular should be prepared for the fact that their algorithms must be transparent and verifiable and will ultimately have to meet certain certification requirements. In addition, extended liability, and thus responsibility, of the AI provider going beyond the known extent of product liability is certainly to be expected, for example with regard to responsibility for supply chains and complex products. As a result, this will be associated with changed, more transparent development processes and extended responsibility, i.e. considerably higher costs, for example for the corresponding insurance cover.

On December 9, 2020, the EU Commission intends to announce a series of new planned competition and antitrust regulations to improve the control of technology groups, particularly the major Internet platforms.

In a report published on November 19, 2020, the EU Court of Auditors also urges the improvement of corresponding EU regulations. In particular, the report criticizes the fact that two antitrust proceedings against Internet platforms already exist under current law, but that enforcement here leaves much to be desired.

Although the EU Commission has opened antitrust proceedings against Google, these have now been pending before the European Court of Justice for more than three years without a decision.

Since the contents of the planned new EU regulations – the so-called Digital Services Act – were already leaked a few weeks ago, I would like to give you a short list of the intended regulations in the following:

1. Exclusive use of data

Under the EU Digital Services Act, large online platforms could be banned from using collected user data if it is not also made available to smaller platforms. The activities of so-called “gatekeeper” platforms such as Google, Amazon and Facebook are being discussed in particular. These large corporations have a disproportionately high degree of economic power and control over the online world and can therefore help decide “at the gate” who may enter the market.

According to the new regulation, gatekeepers are only allowed to use data

  • which is produced on the platform itself
  • or which is generated and collected on other services of the provider

for their own commercial purposes if it is also made available to other commercial users.

2. Rankings

Furthermore, online search engines are to be prohibited from displaying their own services preferentially and in an exposed position. This regulation represents a considerable tightening of the previous EU regulation from July 2019. In this regulation, search engines were only obliged to make it clear and transparent if they give preference to their own products and services.

3. Freedom of choice and Pre-installation

Equally, e-commerce giants will be prohibited in the future from restricting the ability of business users to offer the same goods and services to consumers under different conditions through other online intermediary services. It will also be prohibited for large companies to pre-install only their own apps on hardware systems. It must also be possible for consumers to uninstall applications that have already been pre-installed by the manufacturer.

4. Introduction of a so-called “grey list”

Furthermore, the EU Commission intends to introduce a so-called “grey list” of activities that the executive considers unfair and which may therefore require increased supervision by a competent authority in the future. According to this list, the platform giants are not allowed to prevent third parties from accessing essential information about customers and are instructed not to collect personal data beyond what is necessary to provide their services.

It ultimately remains to be seen in what concrete form the EU Commission will present these regulations in December 2020. However, it can certainly be expected that intensive lobbying by the major Internet groups, as has happened several times in the past, will result in some changes and public discussion before final adoption. As so often in life, it remains exciting.

What is the reason behind this?

In a judgement, the German Federal Supreme Court confirmed the imposition of a fine by the German Federal Cartel Office against Facebook. Although the judgement of the German Federal Supreme Court was issued more than two months ago, I would like to take up the issue again here and draw attention to two problems that are made clear in this decision.


First, it should be noted that this decision will not have any legal consequences for everyday life and behaviour on and with Facebook at the moment.

This is simply due to the fact that the judgement was issued in summary proceedings, in which the court decided on a fine imposed on Facebook by the German Federal Cartel Office in 2019.

Here is the first point of criticism: the length of time the summary proceedings lasted!

Before the fine was imposed, the Federal Cartel Office had already investigated for three years. The so-called summary proceedings then took more than four years to become legally binding!

This appears to be extremely problematic, especially in disputes related to the digital economy, because economic power in digital markets establishes itself quickly.

Facebook, for its part, now has the opportunity to have the decision reviewed intensively once again in the main proceedings. It is possible that the German Federal Supreme Court will also seek an opinion from the European Court of Justice in those proceedings, which would mean years of main proceedings without any legally binding final decision.

Here the legislator is undoubtedly called upon to ensure effective legal protection!


The second important aspect of this decision is that the German Federal Supreme Court did not adopt the substantive reasoning on Facebook's dominant position from the original decision of the Federal Cartel Office. The German Federal Supreme Court did NOT base its decision on a violation of data protection law as a violation of antitrust law, but instead classified Facebook's terms and conditions as questionable under antitrust law.

In doing so, the German Federal Supreme Court avoids deciding the question of whether a violation of European data protection law can in principle constitute a violation of antitrust law. Sooner or later, the German Federal Supreme Court will not be able to avoid a statement on this question. The essential point of the problem is that European data protection law is intended to protect personal data not against an excessively encroaching state, but against the economic interests of the internationally very well positioned Internet platforms! Their business model consists precisely in generating revenue through the intelligent use of personal data.

In conclusion, we can say:

Since we are only at the beginning of the intensive clarification by the highest courts of legal questions on the application and scope of European data protection law, the German Federal Supreme Court will sooner or later have to take a clear position on this issue! In essence, the question is whether the protection of fundamental rights, which is part of public constitutional law, also applies directly to civil-law relations with Internet platforms in the private sector.

In its latest decision, the European Court of Justice declared the Privacy Shield agreement to be invalid. Essentially, it justified this on the basis of US security laws, which grant the authorities extensive access to the data of EU citizens without significant restrictions and without any possibility of judicial control.

At the same time, the European Court of Justice also decided on the standard data protection clauses by which a data importer in a third country gives a contractual assurance to a European company that data transmitted to it will be processed in accordance with EU data protection standards.

In principle, these clauses should continue to apply as long as the laws of the destination country allow the data recipient to comply with them. However, since companies in the USA are legally obliged to make data available to state authorities on a large scale, the European data protection authorities are obliged to suspend or prohibit transfers of data to such countries based on these clauses.

This has a major practical impact on the international exchange of data!

Data transfers to the USA are now in breach of data protection law if they are made exclusively on the basis of a Privacy Shield certification. This covers not only transfers to contract processors, i.e. cloud service providers, but also transfers within a group or to business partners for whom at least part of the data processing is performed in the USA.

The use of software tools where at least part of the data processing takes place in the USA as well as the internal data flows to US Group companies have to be checked.

The European Court of Justice made clear that, due to the uncontrolled monitoring powers of the security authorities, there is no adequate level of data protection in the USA.

The only data transfers that remain permitted are those necessary for the performance of a contract, or for the implementation of pre-contractual measures, with the person concerned. Communication with American customers or hotel bookings in the USA, for example, are still allowed.

Equally, the use of US service providers is not directly affected if the service is provided entirely in European data centers. This is now the case with large US hosting and cloud providers (e.g. Amazon Cloud), as they have server locations in Europe.

In practice, therefore, the only way forward for the time being is to use standard data protection clauses, which ensure a certain degree of legal certainty. In addition, however, considerable uncertainty remains regarding the examination of the level of data protection in the country of the data recipient, which is additionally required.

It therefore remains to be seen how other data protection authorities in Germany and the EU position themselves on the question of the legally compliant use of standard contractual clauses for data transfers to the USA. A renewed attempt to establish a follow-up regulation to the Privacy Shield would be a conceivable option.

However, this agreement would have to include significant restrictions of the American security laws and an expansion of the legal protection options for EU citizens. This does not seem very promising. The USA will not change its security laws because of EU data protection concerns!

As a result, in practice there is no choice but to await further action from the European Commission and recommendations from the data protection authorities. Announcements to this effect have already been made by both the European Commission and the European Data Protection Board (EDPB). So, unfortunately, as so often, we have to wait and see…

Twitter had begun to flag fake news and false claims in published tweets. In doing so, Twitter wanted to make it clear that it questioned the truth of some content.

This also happened to the successful Twitterer Donald J. Trump, who has more than 85.5 million followers. He promptly saw this as censorship of his expression of opinion and threatened to abolish the platforms' existing exemption from liability for illegal content.

Such an action would have fatal consequences and would dramatically worsen the legal position of the Internet platforms. If they were actually liable themselves for the illegal content of their users, changed business models and the introduction of upload filters would give rise to fears of significant censorship of content.

The question of whether Trump could constitutionally do this by presidential decree will not be discussed here.

I also found Mark Zuckerberg's published position on this issue extremely interesting. It becomes clear in the following statement of his: “I do not believe that Facebook and other platforms should be judges of truth!”

Jack Dorsey, CEO of Twitter, responded: “I don’t want to judge the truth, I want to enable people to form a free opinion based on facts!”

And right in the middle of all this, Donald J. Trump, who strongly believes that everything he says is true and factual.

I suppose it is worth thinking about the terms: fact, opinion and truth.

In the constitutional law of Western democracies, freedom of expression and freedom of the press are traditionally established as very high legal values.

In press law, a fundamental distinction is made between opinion and fact: facts are in principle accessible to objective, scientific proof. In contrast, an expression of opinion is characterized precisely by the fact that it is not verifiable, but is rather the result of an individual, intellectual, subjective process.

Fortunately, this broad definition of freedom of expression is consistently represented and protected by the Federal Constitutional Court. An evaluation of opinion in terms of content is forbidden, quite in keeping with Voltaire, the pioneer of the French Revolution and civil freedom.

Freedom of expression in the public space is exactly the right to express and say what others do not want to hear!

Legally problematic, however, is the definition of the concept of truth. Here we leave the realm of justiciable constitutional law and enter the realm of philosophically and religiously shaped worldviews.

The truth can at best be classified as:

a verifiable fact in its most convincing form.

But if we look at it this way, then it is a verifiable fact and no longer a truth.

This means that when we speak of truth, it always contains an element of subjective confidence. For subjective belief, whether ideological or religious, is a characteristic of the definition of opinion in the constitutional sense. Strangely enough, we are dealing with a concept of truth that contains elements of fact as well as elements of the concept of opinion with its subjectivity.

This reminds me in an impressive way of a quotation from Friedrich Nietzsche, who described truth as something other, as something always also bipolar: “Pain is always a pleasure, curse always a blessing, night also a sun and a wise man also a fool […]” (Source: Thus Spoke Zarathustra, p. 402)

After these considerations, it must, surprisingly, be stated that one can quite agree with Donald J. Trump when he says that everything he says is true. Nevertheless, it is only his own, individual, highly personal truth.

But if he claims facts, he cannot escape the necessity of proving them.

Again and again we read or hear: “The ECJ has decided” or “[…] the ECJ has today in its decision strengthened the rights of consumers in the EU” or something similar.

The questions that come to mind in such a case are:

  • What legal effect does the decision have for me as a citizen of the FRG or any other member state?
  • Must all courts in the EU member states now base their decisions on this verdict?
  • What function and legal effects do ECJ rulings actually have?

For this purpose, let us first consider the original competences of the ECJ:

  1. The ECJ is responsible within the EU for all findings of EU treaty violations by Member States.
    This means that the ECJ makes legally binding decisions on whether a state has violated EU treaties, for instance in the case of the ancillary copyright for press publishers introduced by the Federal Republic of Germany into its Copyright Act.
    For example: In this case, the FRG violated EU treaty law, with the consequence that these rules of the German Copyright Act are invalid! (If a matter is to be regulated by an EU directive, as here, then a member state cannot simply make its own national regulation!)

  2. The ECJ is also responsible for the question of whether a state has violated the human rights of an EU citizen, as laid down in the EU Human Rights Convention, by its actions. Here, too, a judgment of the ECJ immediately leads to the invalidity of the member state's actions or of the regulations it applied!
    For example: The FRG had to change its custody regulations after the father of a child born out of wedlock, who paid alimony and insisted on his right of contact, filed his case at the ECJ. Under the legal situation at that time, the mother could in principle deny him the right of contact. The father had lost before all German courts, including the German Constitutional Court. Or the case of the law student from Vienna named Schrems, who saw his EU human rights affected by the then-current practice, based on the Safe Harbor Agreement, of exchanging personal data between the EU and the USA. In this case, the decision of the ECJ led to the immediate invalidity of the Safe Harbor Agreement and therefore to the immediate illegality of the entire exchange of data between the USA and Europe.

But what happens now in cases where national supreme courts, such as the Federal Supreme Courts, appeal to the ECJ?

This only happens in cases where the decision of the national court has to be based on the interpretation of a standard which originates in an EU directive. There is a declared political will to ensure uniform legal practice in Europe in the interpretation of EU directives. How does this happen? Well, if a national court has to decide a case involving such a standard, it issues a so-called referral order to the ECJ with questions on the interpretation of the standard.

The ECJ examines the questions presented in the light of expert opinions and answers them in the form of a decision.

This is now the decision of the ECJ, of which we read in the media!

This decision goes back to the national court, which now has two options:

  • It agrees with the opinion of the ECJ and decides its case on the basis of this interpretation.


  • It does not agree with the interpretation and decides otherwise.

The result is that, in these referral cases, the decisions of the ECJ are not legally binding on the national courts, even though it is the highest instance at the EU level.

The autonomy of the national courts is not affected, so the following applies here too: only in the constellations mentioned under 1. and 2. does the ECJ have binding effect and powers in relation to the Member States and their citizens. In all other legal matters, national autonomy and the independence of the courts remain.

In my seminars on copyright law I have come across this question again and again in recent months. The occasion was the discussion about the new EU Copyright Directive, which was adopted in 2019 amid a great deal of public excitement.

Keywords were upload filters and the direct liability of platform operators for copyright infringements by their users.

The main focus of the directive is the Europe-wide introduction of an ancillary copyright for press publishers and the establishment of direct liability of platform operators. Both are already applicable law in the FRG.

But what legal binding effect does an EU directive have?

It is important to know that an EU directive comes into being in the so-called trilogue procedure at EU level, i.e. with the participation of the three institutions of the EU:

  1. The EU Council, i.e. the assembly of the heads of government of the EU Member States, which also convenes at ministerial level. The EU Council is the political leadership of the EU, and the principle of unanimity applies. This is also the reason why the EU can hardly take any political decisions at the moment: on almost no political question can unanimity be reached.
  2. The EU Parliament, which is larger than any national parliament and directly elected.
  3. The EU Commission as the administrative arm of the EU, which is basically responsible for ensuring uniform economic conditions in the EU and has been given extensive powers in this regard.

So how does the trilogue procedure typically work?

The EU Council decides that in one area, e.g. data retention, it makes sense to install a uniform Europe-wide regulation. The Council then instructs the EU Commission to prepare a corresponding directive.

The EU Commission then develops the proposal for an EU directive with the participation and consultation of the associations and lobbyists concerned and then submits it to the EU Parliament for the first vote.

In the EU Parliament, this draft is then supplemented, amended and expanded and, after the conclusion of the parliamentary debate, is put to the first vote.

This version is then submitted to the EU Council for final examination. The Council can make deletions and amendments and must then vote again.

The version modified in this way is then put back to the EU Parliament for the second vote. The EU Parliament must then vote without being able to change the content of the directive in any way.

If this vote is positive, the final EU directive is there at last!

What does that mean for us now?

According to the EU treaties, such a directive is not directly applicable in the Member States as binding law; it merely triggers the obligation of the Member States to incorporate the directive into their respective national law.

This means that each state must now start its national legislative procedure and incorporate the directive into its national law, e.g. the Copyright Act. Unfortunately, there is no guarantee that this will actually happen! The EU can only impose penalties if a state does not implement the directive within a period of 3 – 5 years.

No EU head of state risks political problems “at home” just to implement an EU directive. It should not surprise us that the FRG, in particular, does not implement all EU directives into national law!

So, as a result, you have to conclude that an EU directive will never be directly legally binding for a citizen of a member state!

Disclaimer: All of the following statements are based on German law.

Again and again you see in job advertisements, or during preparation for a job interview, that the company advertising the position points out that it unfortunately cannot cover the costs of the interview. This raises the question: how do I, as an applicant, deal with this situation? There is hardly any other legal question on which the German courts are as unanimous as on this one.

According to § 670 BGB, the inviting company has to cover the costs!


Well, of course § 670 BGB does not say anything directly about interview costs, but § 670 BGB says:

If, for the purpose of executing an order for the principal – which is solely in the interest of the principal, and without the agent having been explicitly instructed to do so – the agent incurs expenses which he may consider necessary under the circumstances, the principal is obliged to reimburse him.

This means that if someone does something for someone else without a specific order, his expenses must be reimbursed. The only condition is that the transaction is carried out exclusively, or at least predominantly, in the interest of the other party (the principal), and that the person carrying out the transaction has not acted only in his own interest.

Here is a typical example of § 670 BGB: Your neighbour goes diving in the Caribbean and cannot be reached by telephone. Then a pipe bursts in his apartment. As a responsible neighbour you take care of it, hire a plumber and incur further expenses. As you are the contractual partner of the plumber, you also have to pay him. But don’t worry: under § 670 BGB you only have to wait until your neighbour is safely back from his holiday, and then he has to refund you all costs, regardless of whether he wants to or not. These expenses are solely in his interest and not in your own.

Now back to the topic. This is exactly how the courts assess the interests at a job interview. The courts are of the opinion that it is in the exclusive, or at least predominant, interest of the inviting company to improve its business processes with qualified employees and thereby ultimately increase its profit.
But caution: If I am informed before the journey to the interview that the costs will not be covered, and I go to the interview anyway as an applicant, then this constitutes a legally valid waiver of my claim under § 670 BGB.


Therefore, my practical advice:

  1. As an applicant, never raise the question of costs while preparing for the interview; simply pay the expenses yourself and travel to the interview.
  2. Only when the final rejection arrives should you claim these costs.
  3. Claims under § 670 BGB only become time-barred after three years. So, time enough!

Now my personal point of view: I cannot understand why companies with high staff turnover, or those filling simple mini-jobs, pass these interview costs on to applicants. For the companies, these costs are marginal. For the individual applicant, however, they are a heavy burden: he may be in an economically difficult situation because he needs a new job or wants to develop further, he may have to attend several interviews, and these costs always represent a far higher share of his available income than of the inviting company’s.