I suppose my question is: what possible security holes should I be looking for?
Let me give you a five-minute overview of the "traditional" security model for .NET code. (There is a newer, simplified security model that should be used for new code, but it is useful to understand the traditional model first.)
The idea is that assemblies present evidence — things like where they came from, who wrote them, and so on. Security policy examines that evidence and produces a set of permissions associated with the assembly.
When an action is performed that requires a particular permission — for example, creating a dialog box, accessing the printer, or writing to a file — the runtime issues a demand for that permission. The demand examines the code currently "on the stack" to find all the code that called the active code, directly or indirectly. (*)
The demand requires that every caller on the stack have been granted the relevant permission. This prevents a "luring" attack, in which hostile low-trust code calls benign high-trust code and "lures" it into performing some dangerous operation on its behalf that harms the user. Because a full demand checks both direct and indirect callers, the luring attack is foiled.
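To make the stack walk concrete, here is a toy Python simulation of a full demand. All the names here (`demand`, `call`, the assembly names and permission strings) are hypothetical illustrations, not the real .NET API; the point is only that the check inspects *every* frame on the stack, which is what defeats the luring attack.

```python
# Toy model of a CAS-style full demand (illustrative only; the real
# .NET mechanism is CodeAccessPermission.Demand inside the runtime).

class SecurityError(Exception):
    pass

# Each "frame" records which assembly's code is running and its grant set.
call_stack = []  # list of (assembly_name, set_of_permissions)

def demand(permission):
    """Full demand: every caller on the stack must have the permission."""
    for assembly, grants in call_stack:
        if permission not in grants:
            raise SecurityError(f"{assembly} lacks {permission!r}")

def call(assembly, grants, fn, *args):
    """Push a frame for 'assembly', run fn, pop the frame."""
    call_stack.append((assembly, grants))
    try:
        return fn(*args)
    finally:
        call_stack.pop()

def write_file():
    """A protected operation: demands FileIO before doing the work."""
    demand("FileIO")
    return "wrote file"

# A fully trusted app calling directly succeeds:
print(call("TrustedApp", {"FileIO", "UI"}, write_file))  # wrote file

# Hostile low-trust code luring a trusted library fails, because the
# stack walk sees the low-trust frame underneath the trusted one:
def lured():
    return call("TrustedLib", {"FileIO"}, write_file)

try:
    call("HostileCode", {"UI"}, lured)
except SecurityError as e:
    print("demand failed:", e)  # HostileCode lacks 'FileIO'
```

The key line is the loop in `demand`: it does not stop at the immediate caller, so trust in `TrustedLib` cannot be borrowed by `HostileCode`.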
An assert allows high-trust code to change the semantics of a demand. The assert says: "I am friendly high-trust code, and I assert that I cannot be lured by a hostile low-trust caller into performing a dangerous operation on its behalf." An assert is usually paired with a weaker demand; that is, the high-trust code says "I assert that I can safely call unmanaged code even if my caller cannot," and then demands, "but my caller must have permission to access the printer, because that is what I am going to do with my unmanaged code."
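Here is a toy Python model of how an assert modifies the stack walk. Again, every name is a made-up illustration rather than the real .NET API: the idea is that the walk runs from the most recent frame outward, and a frame that asserts a permission (and itself holds it) stops the walk, so callers further out are never checked for that permission. The wrapper also shows the "weaker demand" pattern: it demands `UI` from everyone, then asserts `Printing`.

```python
# Toy model of demand + assert (illustrative only; the real .NET
# mechanism is CodeAccessPermission.Assert).

class SecurityError(Exception):
    pass

call_stack = []  # list of (assembly, grants, asserted_permissions)

def demand(permission):
    """Walk from the most recent caller outward; an assert stops the walk."""
    for assembly, grants, asserted in reversed(call_stack):
        if permission not in grants:
            raise SecurityError(f"{assembly} lacks {permission!r}")
        if permission in asserted:
            return  # walk stops here; callers further out go unchecked

def call(assembly, grants, fn, *args, asserted=frozenset()):
    call_stack.append((assembly, grants, set(asserted)))
    try:
        return fn(*args)
    finally:
        call_stack.pop()

def print_document():
    demand("Printing")
    return "printed"

def safe_print_wrapper():
    # High-trust library code: makes its own weaker demand, then asserts
    # Printing so that low-trust callers can still print safely.
    demand("UI")  # weaker demand: every caller must at least have UI
    return call("TrustedLib", {"UI", "Printing"}, print_document,
                asserted={"Printing"})

# Low-trust code has UI but not Printing; the assert lets it succeed:
result = call("LowTrustApp", {"UI"}, safe_print_wrapper)
print(result)  # printed
```

Note that the model only honors an assert from a frame that actually holds the permission, mirroring the rule that only code granted a permission may assert it.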
The problem with demands is that they are expensive. You have to do a full stack walk and interrogate all the permissions. If the operation is cheap — say, setting a pixel in a bitmap — you don't want to do a full demand every time, because you end up spending all your time in redundant security checks.
Hence link demands. A link demand is checked once per caller of the protected method, the first time the code that calls the protected method is jitted, and it checks only the direct caller of the protected method; it does not do the full stack walk. After that, operations in the calling code proceed without any security check against that caller. (It really ought to be called a "jit demand" rather than a "link demand," because the mechanism used is that the demand is checked when the caller is jitted.)
Obviously this is cheaper — one check per caller that looks at only one assembly is cheaper than one check per call that looks at every assembly on the stack — and more dangerous.
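A toy Python sketch can show both properties at once — the check looks only at the direct caller, and it happens only the first time that caller is seen (standing in for jit time). As before, every name here is a hypothetical illustration, not the real `LinkDemand` mechanism, which lives inside the runtime.

```python
# Toy model of a link demand (illustrative only; the real mechanism is
# the LinkDemand security action, verified when the caller is jitted).

class SecurityError(Exception):
    pass

checked_callers = set()  # callers whose link demand has already passed

def link_demand(permission, caller, caller_grants):
    """Check only the direct caller, and only the first time."""
    if caller in checked_callers:
        return  # already verified "at jit time"; no check on later calls
    if permission not in caller_grants:
        raise SecurityError(f"{caller} lacks {permission!r}")
    checked_callers.add(caller)

def set_pixel(caller, caller_grants):
    link_demand("UnmanagedCode", caller, caller_grants)
    return "pixel set"

# The direct caller is trusted, so the call succeeds - even if hostile
# low-trust code further up the stack lured the trusted caller into
# making this call. That is exactly the danger described above.
print(set_pixel("TrustedLib", {"UnmanagedCode"}))  # pixel set
```

Because no stack walk happens, the burden of preventing a luring attack shifts from the runtime onto the direct caller — which is the subject of the next paragraphs.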
A link demand is, in effect, an opt-out of the full security check. The link demand says: "Caller, by passing my link demand once, you may call me cheaply from now on. But I am turning off the security system, so you are now responsible for ensuring that no caller of yours can mount a successful attack using the fact that I have granted you the right to call me without security checks in the future."
So you are calling a method that has a link demand. You are therefore faced with a question: are you willing to take on that responsibility? You get to call this method cheaply. Can you guarantee that hostile low-trust code cannot use the fact that you may call this method without security checks to harm the user?
If you cannot or will not make that guarantee, then issue a full demand for the permission yourself, which will then require all of your callers to satisfy it. Or pass the buck to your caller: put a link demand on your own method, and make your caller take on the same responsibility.
(*) As I am fond of pointing out, the call stack does not actually tell you who called you; it tells you where control is going to go next. Since those are usually the same thing, everything works out fine. But it is possible to find yourself in situations where "who called you?" has come apart from "where is control going next?"; in those environments you have to be very careful about using traditional stack-walking code access security. The newer, simplified security model is a better fit for those scenarios.