If you run the following in bash (while in a directory managed by git), you will see each color and the corresponding code for it:
for n in $(seq 1 255); do git log -1 --pretty=format:"%C($n)color $n"; done
You can put those number codes into the %C(<code>) specifier as shown in the snippet above to customize your log output.
1. Auto-play a video on page load
2. Switch between multiple videos using buttons
I managed to get the context menu open by right-clicking on the "Paste" button.
Very good. To keep custom functions organized and prevent them from cluttering the Global Environment pane in RStudio, consider creating an R package or sourcing them from a script. This approach allows you to manage and hide small functions used by other functions, similar to how a package works.
If you are not dead set on Heroku, you can try instantsite.dev. Deployment there is very intuitive and low-friction. It's limited to static sites only, but it's super easy to use.
I was able to do so after restarting services. No need for manager.xml file update.
Prometheus does have a "remote write" API. Its intended for use by Prometheus itself to send metrics to a remote database and is not generally considered good practice for publishing metrics to a Prometheus server, but it is available if it has been enabled with a command line option when the server is started.
See https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver and https://prometheus.io/docs/specs/prw/remote_write_spec/
There is a prometheus-remote-writer Python library at https://pypi.org/project/prometheus-remote-writer/
If you are getting values from the App.config file, those keys may not exist under
<appSettings>
or you may be declaring the key in two different forms:
<add key="abc" value="abc"/>
<add key="xyz" value="xyz"></add>
I finally re-wrote a StreamingResponse
class which takes the file paths to be read and generate the .tar.gz file while sending chunks to the network.
This way, I had to read deeply the tarfile.TarFile
and tarfile._Stream
and rewrite it.
The whole code is available at this gist: https://gist.github.com/etienne-monier/a608f7174ea808e3f8ac4e714156f3b8
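For reference, here is a minimal sketch of the same idea (not the gist's code), assuming FastAPI/Starlette's StreamingResponse and a small in-memory buffer object that tarfile can write to while a generator yields the accumulated chunks; file names and paths are placeholders:

import tarfile
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

class _ChunkBuffer:
    # Minimal file-like object tarfile can write to; we drain it between members.
    def __init__(self):
        self._chunks = []
    def write(self, data):
        self._chunks.append(data)
        return len(data)
    def drain(self):
        data = b"".join(self._chunks)
        self._chunks.clear()
        return data

def tar_gz_stream(paths):
    buf = _ChunkBuffer()
    # "w|gz" opens a non-seekable, streaming gzip tar writer
    with tarfile.open(mode="w|gz", fileobj=buf) as tar:
        for path in paths:
            tar.add(path)       # writes this member into buf
            chunk = buf.drain()
            if chunk:
                yield chunk
    tail = buf.drain()          # tar footer + gzip trailer written on close
    if tail:
        yield tail

@app.get("/archive")
def archive():
    return StreamingResponse(tar_gz_stream(["file1.txt", "file2.txt"]),
                             media_type="application/gzip")

Note that this simple version still buffers each member once in memory; the gist goes further and streams file contents in fixed-size chunks.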
All we need here is patch-package.
I don't think any info needs to be added as everything is explained here:
https://www.npmjs.com/package/patch-package
I ultimately gave up and performed a complete reset of the MacBook. It fixed this problem and also a few more, actually.
Well, to solve this problem and keep all the values, we can set a few options.
To keep NaN-like strings as they are, pass keep_default_na=False and na_values=[' '], then use fillna('') for the remaining missing values:
import pandas as pd

df = pd.read_csv('test.csv', header=None, dtype=str, keep_default_na=False, na_values=[' ']).fillna('')
I hope this solves your problem.
Apple provides limited access to Wi-Fi details for privacy reasons, but you can check the SSID using CNCopyCurrentNetworkInfo, which is available only under specific conditions.
You need to use the UTF-16 hex code, remove the 0x on both values, and concatenate both as shown below.
That's how I write the handshake emoji in a cell (the UTF-16 hex code of that emoji is 0xD83E 0xDD12):
Cells(LineMeeting, ColLock).Value = ChrW("&HD83E") & ChrW("&HDD12")
If you're seeing too much vertical margin, it means your target height is larger than necessary relative to the width, based on the image's aspect ratio.
Instead of hardcoding both width and height, only set the width, and compute the height dynamically.
This way, the image will fill the target area without leaving vertical margins because the aspect ratio is preserved without oversizing.
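As a quick numeric illustration (the numbers are hypothetical), the height simply follows from the chosen width and the source aspect ratio:

def target_height(src_width, src_height, target_width):
    # Preserve the aspect ratio: derive the height from the chosen width.
    return round(target_width * src_height / src_width)

# A 1600x900 source rendered 800 px wide needs exactly 450 px of height,
# so no vertical margin is left over.
print(target_height(1600, 900, 800))  # 450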
If you want the image to completely fill the target width and height without any margins, and you're OK with cropping, use a cover-style fit instead.
This will fill the container and cut off overflow (useful for thumbnails, banners, etc.).
You're facing Excel Interop limitations. Excel needs a logged-in user session. Use OpenXML or a server-safe export library instead.
I encountered a similar issue where a <div> with a custom class appears hidden in Chrome due to a display: none !important user agent stylesheet. Interestingly, this doesn't affect other browsers like Edge, and my colleagues are not experiencing the same problem.
The Express res.send() method only accepts one argument for the response body, unlike console.log, which can take multiple.
Correct it to below line:
app.get('/', (req, res) => res.send('Hello from ' + require("os").hostname()))
Please ignore this; it is resolved now.
Scenario: Call API with retry
* configure retry = { count: 3, interval: 1000 }
Given url 'https://your-api-endpoint'
And retry until responseStatus == 200
When method get
Then status 200
This video saved me. Start at 6:00 for setup. Add command as npm, and arguments as run server:dev as shown at 12:40.
For environment variables you can use the flutter_dotenv package (https://pub.dev/packages/flutter_dotenv): set your env values there and load the right ones depending on whether your app is hitting prod, stage, etc.
Or you may go with the alternative solution in the attached image.
If your stack uses AWS Amplify as your cloud, you may use this package to fetch the variables as well: https://pub.dev/packages/amplify_secure_storage
I hope one of these solutions helps you.
EDITED: But all in all, keys are never 100% protected on the client side. There's always a weak point.
I can't add comments near the lines in the PR - the + button is just greyed out.
Probably "Editor Commenting" is disabled.
Go to the command palette and toggle editor commenting.
I had this problem recently - my issue was that the COPY . . command was accidentally overwriting dependencies by copying over the .venv folder.
The solution was to add .venv to a .dockerignore file so it doesn't get copied.
Firstly, set the environment variable path correctly: install Flutter in the %userprofile%\flutter folder and add it to the system PATH environment variable. Then restart the computer and check the Flutter installation from cmd or any other command-line tool
using the command flutter --version; do the same for Dart and check it with dart --version.
Then run flutter doctor to check for any other misconfigurations.
Then run flutter doctor --android-licenses for the remaining license configuration, and check the SDK Manager for any remaining requirements for the project you are building.
Try running flutter pub get to check version compatibilities and flutter pub upgrade to fetch the missing requirements.
Now run flutter run from the directory containing your application's entry file.
I hope this gives the desired results; if not, you can ask follow-up questions here.
TL/DR: It is neither.
The domain or subdomain of your microservice is checkout management: is the order valid? Are you allowed to check out? etc.
Authenticating to your partner API is not the purpose of your microservice; it is an implementation detail of how you store your state. That is because your service acts as an anti-corruption layer above your partner's. You store your state in their service, and in order to do that, you need a token. This is the how, not the why. This can be demonstrated by the fact that, if your partner decides to change their API and switch to cookie-based auth, or opaque tokens, or even a password sent to you by carrier pigeon, you'd have to change your code without changing the behavior of your service or application in the eyes of your users.
Authentication is not a part of your domain layer but of the infrastructure layer.
Or more accurately, from a DDD perspective, it goes in a non-domain unit.
TL/DR: probably not.
I suggest you read the excellent article Introducing Assemblage - a microservice architecture definition process by Chris Richardson about the generics of "should these things go in the same or separate service".
Reasons why you'd want to have it in a separate service would include:
Other than that, you'd probably benefit from keeping the acquisition and storage of the token in your checkout service, at least at process level. If you need to share that feature with other services and you are concerned about copy/pasting the code, you should rather look into making that code a shareable library which can be embedded in each microservice (though that only works if these services share the same tech stack).
If you are concerned that scaling services will over-query the token endpoint on your partner, then you should question how your services handle that token in the first place. Why query a new token every 6 minutes when a JWT has an explicit lifetime that probably exceeds that duration? Why query tokens continuously instead of requesting one on demand?
If the problem is sharing tokens among service instances, then you'd probably look into a distributed cache. Set up a resilient KV store, such as Redis, and store your token there. When a service needs access to the partner service, check the Redis store. If there is no token, or it has expired, get a new one and feed the cache. If there is a valid token, retrieve it and use it to call the partner's API. This will save on your development manpower.
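A rough Python sketch of that pattern, assuming redis-py and a hypothetical fetch_new_token() call to the partner's token endpoint:

import redis

r = redis.Redis(host="localhost", port=6379)
TOKEN_KEY = "partner:access_token"

def fetch_new_token():
    # placeholder for the real call to the partner's token endpoint
    return {"access_token": "abc123", "expires_in": 3600}

def get_partner_token():
    cached = r.get(TOKEN_KEY)
    if cached:
        return cached.decode()
    token = fetch_new_token()
    # expire the cached copy slightly before the token itself expires
    ttl = max(token["expires_in"] - 60, 1)
    r.set(TOKEN_KEY, token["access_token"], ex=ttl)
    return token["access_token"]

Whichever instance finds the cache empty refreshes it; every other instance just reads the shared value.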
Yeah, it works perfectly fine...
This feature would be called something like Flatten Directories in the IntelliJ vocabulary, but that doesn't seem to exist. We only have Flatten Modules and Flatten Packages. I found this issue, and one of the comments mentions you might try the Packages view, but I'm unsure if that is exactly what you are looking for.
I ended up just redefining everything like so:
\chapter*{Appendix}\label{chap:Appendix}
\addcontentsline{toc}{chapter}{Appendix}
\renewcommand{\thesection}{A\arabic{section}}
\renewcommand{\thetable}{A.\arabic{table}}
\renewcommand{\thefigure}{A.\arabic{figure}}
\pagestyle{plain}
...which gives the (mostly) intended result.
It is better to create two threads for this purpose: use blocking-read mode on each UART, and have each thread write the data it receives from one UART to the other, and vice versa.
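A minimal sketch of that design in Python, assuming pyserial and two hypothetical device paths:

import threading
import serial  # pyserial

def bridge(src, dst):
    # Blocking read on one UART, forward every chunk to the other.
    while True:
        data = src.read(src.in_waiting or 1)  # blocks until at least one byte arrives
        if data:
            dst.write(data)

uart_a = serial.Serial("/dev/ttyS0", 115200, timeout=None)
uart_b = serial.Serial("/dev/ttyS1", 115200, timeout=None)

threading.Thread(target=bridge, args=(uart_a, uart_b), daemon=True).start()
threading.Thread(target=bridge, args=(uart_b, uart_a), daemon=True).start()
threading.Event().wait()  # keep the main thread alive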
Check the current database: running db returns the currently selected database; if you are not on the correct one, switch to it.
Check whether the collection even exists.
Verify that data is present: if the count returns 0, the collection is empty.
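The same three checks from Python with pymongo (database and collection names are placeholders):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]                            # 1. make sure this is the right database
print(db.list_collection_names())              # 2. does the collection even exist?
print(db["mycollection"].count_documents({}))  # 3. 0 means the collection is empty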
That error message indicates that either the named assembly or one of its references cannot be found. Presumably, as the assembly is in the same folder as the exe, it is one of the assemblies referenced by it.
Unfortunately the exception message doesn't tell you what assembly it can't find, so you must use the Assembly Binding Log Viewer (Fuslogvw.exe) to find out.
For a healthcare application, a reliable Big Data platform needs to handle vast amounts of patient data, manage real-time analytics, and ensure secure storage. Platforms like Apache Hadoop for batch processing and Apache Spark for real-time analysis work well. We’re also building similar solutions that integrate seamlessly with healthcare software to ensure data accuracy and efficient insights, all while maintaining high security and compliance standards. We focus on creating user-friendly, scalable systems that enhance healthcare outcomes.
What you're currently doing is called an 'RP-initiated logout'; see the spec: https://openid.net/specs/openid-connect-rpinitiated-1_0.html
This is where the Relying Party (your client) tries to log the user out at the OpenID Provider (OP; Microsoft in this case).
Such a logout must be done via redirecting the user to the OP's logout endpoint, where the user SHOULD be asked for confirmation on whether he really wants to be logged out. There's no way to do this silently since the user might disagree with being logged out.
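For illustration, a sketch of building that redirect in Python for the Microsoft identity platform, using the parameter names from the RP-initiated logout spec (id_token_hint, post_logout_redirect_uri); the tenant and URLs are placeholders:

from urllib.parse import urlencode

def build_logout_url(tenant, id_token, post_logout_redirect_uri):
    # logout endpoint of the Microsoft identity platform; adjust for other OPs
    base = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout"
    params = {
        "id_token_hint": id_token,
        "post_logout_redirect_uri": post_logout_redirect_uri,
    }
    return f"{base}?{urlencode(params)}"

# Redirect the user's browser to this URL; the OP handles the (possibly confirmed) logout.
print(build_logout_url("common", "<id-token>", "https://app.example.com/signed-out"))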
Some OPs might offer additional ways to terminate sessions not covered by the specification. For example, if you used Keycloak as an OP, it provides a separate REST API that allows terminating a session with a DELETE request. There might also be specific admin panels, UIs, etc. to do this. However, this depends on your specific Identity Provider. I haven't been able to find any such API endpoint for Microsoft.
You might get confused by the mention of 'backchannel logouts' when searching information about this topic. However, a Backchannel logout is when the session is already terminated on the OP through whatever means and the OP then informs the RPs (the clients) to terminate the session via a backchannel, not the other way around.
us-central2-b is not listed as available in the documentation of available machine families by zone.
SELECT a.id AS booking1,
b.id AS booking2,
a.start_date, a.end_date,
b.start_date, b.end_date
FROM bookings a
JOIN bookings b
ON a.id < b.id
AND a.start_date <= b.end_date
AND a.end_date >= b.start_date;
Without this option the code works:
options.add_argument("--disable-accelerated-2d-canvas")
You could try to replace it with:
options.add_argument("--disable-accelerated-2d-canvas")
This prevents fallback to software rendering when the GPU is disabled, useful in combination with --disable-gpu.
I have spent a lot of time thinking about this question myself and finally reached a conclusion.
In Dijkstra's algorithm we keep a visited array that marks a node as visited when it is popped from the priority queue. The children of this node are processed, and the node is not processed again, on the assumption that we have already found the shortest path to it. However, if we are working on a directed acyclic graph with negative edge weights, it is possible that we later find a shorter path to this node, and then we would have to update the minimum distance in the distance array for this node as well as for all the nodes that were originally reached after it.
Therefore, if we wish to work on a directed acyclic graph with negative edge weights, we can use Dijkstra's algorithm, but we have to avoid using a visited array and check every possible path to a node. This also increases the original O(E log V) time complexity of Dijkstra's algorithm, since each node may now be processed more than once, possibly many times.
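A sketch of that modification in Python: a standard heap-based Dijkstra, but without a visited set, so a node is relaxed again whenever a shorter distance to it is found (safe only when no negative cycles are reachable, e.g. in a DAG, and nodes may be re-processed many times):

import heapq

def dijkstra_no_visited(graph, source):
    # graph: {node: [(neighbor, weight), ...]}; weights may be negative
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale queue entry, a shorter path was already found
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:  # relax even if v was popped before
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# small DAG with a negative edge
g = {"a": [("b", 4), ("c", 2)], "b": [("d", -5)], "c": [("d", 1)], "d": []}
print(dijkstra_no_visited(g, "a"))  # {'a': 0, 'b': 4, 'c': 2, 'd': -1}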
Check image for better understanding.
Please upvote if you agree, also suggest any issues with my approach.
-Ansh Sethi (IITK)
That's standard percent URL encoding, in this case of UTF-8 encoded text. A URL cannot contain non-ASCII characters (actually, a subset thereof, different subsets for different parts of the URL). You cannot actually have "이윤희" in a URL. To embed arbitrary characters, you can percent encode them. This simply takes a single byte and encodes its hex value as %xx. The UTF-8 byte representation of "이윤희" is EC 9D B4 EC 9C A4 ED 9D AC, which is exactly what you're seeing in the URL.
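You can reproduce the encoding in Python, for example; the only assumption is that the URL component is UTF-8 encoded before being percent-encoded:

from urllib.parse import quote, unquote

encoded = quote("이윤희")   # percent-encode the UTF-8 bytes
print(encoded)              # %EC%9D%B4%EC%9C%A4%ED%9D%AC
print(unquote(encoded))     # 이윤희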
We faced a similar challenge while exporting scheduling data from Snowflake for internal reporting at MetroWest Car Services. What worked for us was scripting separate operations per table and automating them through a task. This post helped us refine the approach. Thanks for sharing!
Start your career with fresher data analyst jobs in Bangalore through 360DigiTMG. Their industry-relevant training covers SQL, Python, Excel, and data visualization tools like Tableau and Power BI. With hands-on projects and dedicated placement support, 360DigiTMG helps you secure top job opportunities and build a strong foundation in data analytics.
slaps forehead
At some point, in frustration that there Wasn't A Space, I put about a dozen spaces in my text, just to see if maybe it -was- putting in spaces but something was resizing the text or, I dunno, just to try to figure out what was going on.
THEN I set white-space: pre-wrap;. Which fixed my overall problem, but also put all dozen of my spaces in the text, which is why it looked like it was doing a Whole New Wrong Thing. It wasn't. As always, the code was doing EXACTLY what I told it to do.
Those are a lot of minutes I won't get back...
You could consider using another table component to maintain alignment and enable a scrollbar.
Please check the demo for reference.
To resolve the issue, add the following code to the /bootstrap/cache/packages.php file:
'laravel/sanctum' => array(
    'providers' => array(
        0 => 'Laravel\\Sanctum\\SanctumServiceProvider',
    ),
)
For anyone who has already configured
{
  "emitDecoratorMetadata": true,
  "experimentalDecorators": true
}
in tsconfig.json but still runs into NoExplicitTypeError: you might be using tsx, which doesn't support emitDecoratorMetadata (see the tsx compiler limitations).
Attached is the summary from the Sales Session conducted on July 23. Kindly review when convenient.
Give me CSS in HTML that formats every letter separately, and use only the color black for this.
Also, you could parse GET params from the URL with URLSearchParams:
myIframeRequest.html?param=val
//inside iframe
const urlParams = new URLSearchParams(window.location.search);
console.log(urlParams.getAll('param'));
Easy steps to convert CodeIgniter MVC to HMVC: inside the application folder, you'll need two subfolders, core and third_party (see the Google Drive link for the files needed in core and third_party). The controller should then extend MY_Controller.
Use Javascript to sniff the user agent string.
If the string contains "MSIE" it's an older version of IE, and if it contains "Trident" it's a newer version. This page has a list of the user agent strings for various versions of IE on various operating system versions.
When the Javascript runs, if IE is not detected then add a class to the body element. In your CSS, make all your style rules dependent on that class being present.
What's the benefit of doing this though? How many of your website visitors are browsing with Internet Explorer?
It turns out that JetBrains AI Assistant guidelines/rules can be set via the Prompt Library.
Go to WebStorm Settings -> Tools -> AI Assistant -> Prompt Library to provide instructions and guidelines for specific AI Assistant actions.
For Azure AKS, you need kubelogin from https://github.com/Azure/kubelogin, the official Microsoft-supported binary for AKS AAD login.
In newer IntelliJ versions you can do it in the settings; see the screenshot: Settings - Coverage.
expect class LocationService {
suspend fun getUserLocation(): UserLocation?
}
data class UserLocation(val latitude: Double, val longitude: Double)
Here is an example with the @MockitoSpyBean annotation:
@SpringBootTest()
class TestExample {

    @MockitoSpyBean
    VersionRepository versionRepository;

    @Autowired
    VersionService versionService;

    @Test
    void findAllVisibleAndReadableVersions() {
        when(versionRepository.findAll()).thenReturn(buildVersions());
        versionService.findAll(); // versionService.findAll() calls versionRepository.findAll() internally
        // add assertions here
    }

    // Return canned versions instead of the real versionRepository.findAll() result
    private static List<Version> buildVersions() {
        return List.of(); // build your versions here
    }
}
Thanks for answering my question. It was a simple error; this is how to increment the visits: $counter = $record[0]['visits'] + 1;. I consider it resolved.
You can't use a wildcard in this way; the 'd' tag must match exactly in order for the email to verify. Each subdomain must have its own DKIM record.
See https://datatracker.ietf.org/doc/html/rfc6376#section-3.5 for more details on the header field.
You can use one selector by using CNAME delegation eg:
s1._domainkey.a.foo.com. IN CNAME s1._domainkey.foo.com.
To anyone facing this issue: please cross-check that there is no folder named api in the root directory. For me this was the issue; once I renamed it to endpoints, everything worked perfectly fine.
Use requestAnimationFrame() for smooth animations by syncing with the browser's refresh rate. Avoid heavy tasks inside the callback and only call it when needed.
The sequence of releasing COM objects also matters.
In the case of Excel (Excel.Workbook, Excel.Worksheet, Excel.Application), release them in this order:
Excel.Worksheet
Excel.Workbook
Excel.Application
The answer that HelloPeople gave works for me, but if you have a big project with many sub-projects you can also press Cmd + Shift + P and run Java: Clean Java Language Server Workspace, then restart VS Code. Also make sure you have the Java extensions installed.
It may be that the @Id in the entity is not correct. The repository automatically caches the data by the @Id. To solve this issue, consider using @IdClass to handle the composite key.
Based on the described scenario, one effective approach is to add a created_at field to the User model to track account creation time. Upon user registration, a verification email should be sent, and the account should remain inactive until the email is verified. If the user does not verify their email within a defined time window (e.g., 1 hour), the unverified account can be automatically deleted. This allows the same email address to be used again for registration after the expiration period, ensuring legitimate users are not blocked by inactive, unverified accounts.
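A rough Django-flavoured sketch of that idea, assuming a custom user model; the field name created_at, the one-hour window, and the cleanup function are illustrative, not a fixed recipe:

from datetime import timedelta

from django.contrib.auth.models import AbstractUser
from django.db import models
from django.utils import timezone

class User(AbstractUser):
    # account stays inactive (is_active=False) until the email is verified
    created_at = models.DateTimeField(auto_now_add=True)

def delete_expired_unverified_accounts():
    # run periodically (cron, Celery beat, a management command, ...)
    cutoff = timezone.now() - timedelta(hours=1)
    User.objects.filter(is_active=False, created_at__lt=cutoff).delete()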
While I wait for human answers, I'm posting, as a "first approximation" ;-), the answer from GPT-4.1 nano:
Great question! You're essentially considering batching multiple animations to reduce the number of `requestAnimationFrame` calls, which is a common performance optimization technique.
Key points to consider:
1. Performance of `requestAnimationFrame` :
2. Batching Animations:
3. Trade-offs:
4. Browser Efficiency:
---
### My recommendation (based on experience and best practices):
- If performance is critical (e.g., hundreds of concurrent animations):
Implement a batching system that consolidates multiple move operations into a single animation frame callback. This is a proven optimization strategy, especially if each animation is relatively simple.
- If simplicity and maintainability are more important:
Your current per-element approach is straightforward and easier to manage. Modern browsers handle this well for a moderate number of animations.
---
### Summary:
- Yes, batching `requestAnimationFrame` calls can improve performance when many animations run simultaneously.
- You don't need to test across browsers for the basic principle—most modern browsers are quite efficient with `requestAnimationFrame`.
- Implementing a batching system involves extra logic but can be worthwhile if performance issues arise.
So to answer my own question....
I created a dummy database on a P1 Pricing Tier and I created a table with MEMORY_OPTIMIZED = ON
An attempt to scale the database to a Standard Pricing Tier failed with an error:
**Failed to scale from Premium P1: 125 DTUs, 250 GB storage, zone redundant disabled to Standard S3: 100 DTUs, 250 GB storage for database: MOTest. Error code: undefined. Error message: The database cannot proceed with pricing-tier update as it has memory-optimized objects. Please drop such objects and try again.**
So that answers my initial question: **You must not do this if you're planning to scale the database to a Standard pricing tier.**
As a secondary observation, I note that the message also states: "Please drop such objects and try again." I was concerned about that, too (though I failed to mention it in my original question). On a standard SQL database an attempt to create a memory-optimized table leads to a message: "To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container."
I was worried that in an Azure SQL database it would do something like that "behind the scenes", and that that alone would suffice to block future attempts to scale back to a Standard pricing tier. That fear turned out to be unfounded. Once I deleted the Memory Optimized table I could scale the database back to a Standard pricing tier again.
builder.AddSqlServerDbContext<TicketContext>(
"sqldata",
options =>
{
options.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
});
Can you try this?
Would have preferred to add a few comments first, but I don't have the rep.
I tried the example you provided in a quick stub MsTest project, and I was unable to replicate the issue.
However, I'm guessing that there is more than one test method in the class that you're testing? (This is the bit I wanted to comment about to get more info)
I'd want to know the above since, if that's the case, and your project is set up to run tests in parallel, I'm wondering if there could be an issue with the creation of the JsonSerializerOptions, since once the options are used it will have issues with adding another converter.
It might be worth trying to have the options as a field that you set during initialization as follows
using System.Text.Json;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace TestJsonConvert;
[TestClass]
public class ObjectToInferredTypesConverterTests
{
    private JsonSerializerOptions _options;

    [TestInitialize]
    public void Initialize()
    {
        _options = new JsonSerializerOptions();
        _options.Converters.Add(new ObjectToInferredTypesConverter());
    }

    [TestMethod]
    public void Read_TrueBoolean_ReturnsTrue()
    {
        string json = "true";
        var result = JsonSerializer.Deserialize<object>(json, _options);
        Assert.IsInstanceOfType(result, typeof(bool));
        Assert.IsTrue((bool)result);
    }
}
My csproj for the test is
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net472</TargetFramework>
<LangVersion>10</LangVersion>
<IsPackable>false</IsPackable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="coverlet.collector" Version="6.0.0"/>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0"/>
<PackageReference Include="MSTest.TestAdapter" Version="3.1.1"/>
<PackageReference Include="MSTest.TestFramework" Version="3.1.1"/>
<PackageReference Include="System.Text.Json" Version="9.0.7" />
</ItemGroup>
</Project>
In the hopes that it can help with understanding/reproducing the results I received
Try this:
file.remove(".RData")
and then don't save the workspace upon exiting the R session.
Please share the appsettings configuration file; if the connection string is sensitive, replace it with a placeholder.
import rasterio.features
import shapely

# your job: get the geometry expressed in pixel coordinates first
xmin, ymin, xmax, ymax = shapely.total_bounds(geometry)
# out_shape is (rows, cols), i.e. (height, width)
out_shape = (int(round(ymax - ymin)), int(round(xmax - xmin)))

array = rasterio.features.rasterize(
    [geometry], out_shape=out_shape, dtype=int)
Core data doesn't have a built-in way to remove duplicates automatically. The best practical approach is to fetch all objects for that entity. Keep tracks of the identifiers you've already seen and delete any objects with duplicate identifiers as you find them. This way, you avoid nested loops and unnecessary filtering.
If duplicates happen often it's a good idea to add a uniqueness constraint on the identifier property in your data model to prevent duplicates from being saved in the first place.
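The bookkeeping itself is simple; sketched here in Python rather than Swift just to show the single-pass idea (in Core Data the delete callback would be the managed object context's delete followed by a save):

def deduplicate(objects, delete):
    # objects: records exposing an .identifier; delete: callback that removes one record
    seen = set()
    for obj in objects:
        if obj.identifier in seen:
            delete(obj)          # duplicate: remove it
        else:
            seen.add(obj.identifier)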
Please show me your code; I'd be glad to help you.
Joe's answer is smart, but for some reason I couldn't use that. Instead, I did this and the warnings disappeared from the Apache error log.
At the very end of wp-config.php
, add:
error_reporting(E_ALL & ~E_WARNING & ~E_NOTICE & ~E_DEPRECATED & ~E_STRICT);
Tested on: FPM 8.0 / WP 6.8.2
BE WARNED!!! The text produced by @Jorgeblom's code is NOT "encrypted", but only "masked", because it always uses the same pattern. Please do not be fooled by convoluted script wording and schematic shorthand notation! It does not make the algorithm more complex, nor more effective or faster - quite the opposite: separately declared function variables require unnecessary resources and additional management overhead, and are often only intended to impress the layman - ridiculous!
If you break down the "arrow notation" into its components, you will find chained "split" and "merge" actions over several nested arrays, which also lead to unnecessary costs and delays in the program flow due to their constant rearrangements. In the end, the core encryption algorithm (which is the main topic here) consists only of a primitive XOR operation - nothing more!
The monstrous fuss can be reconstructed with two simple "for-next" loops, as you can see below - and you'll recognize all the nonsense immediately:
function crypt(salt, text) {
// First, we assign a virgin output string:
var output = "";
// Then, we walk letter-wise through the given original text:
for(var i = 0; i < text.length; i++){
var val = text.charCodeAt(i);
// Now we manipulate each letter value with a sum of "salt" XOR's.
// This is absolutely stupid, because it repeats for every main-loop
// the same sub-loop. So it gives each time also an identical XOR-sum
// and retards the entire iteration extremely - just for nothing:
for(var k = 0; k < salt.length; k++){
val = val^(salt.charCodeAt(k));
}
// Append each XOR-result as 2-digit HEX-value to the output string:
output += (val < 16 ? '0' : "") + val.toString(16);
}
// Return that very poor encrypted (let's say "masked") output:
return output;
}
... surprisingly poor, isn't it? The second (nested) "for-next" loop is pointless because it always returns the same pattern and therefore always codes the same characters in the same way. This is why we can only speak of "masking" instead of "encrypting". A simple pattern comparison of the ciphers reveals frequently occurring identical letters on the spot.
As the effective XOR value does not change during the entire sequence anyway, the task can be solved very conveniently with the following little helper function:
function hackThis(crypt) {
// Try the max. amount of 256 XOR values (0 - 255) to decode the cipher.
// The result are 256 text blocks and you have to check each (256 items are
// manageable). I ASSURE: One of them is READABLE AND CORRECT DECRYPTED!!!
for(var Xval = 0; Xval < 256; Xval++){
// Convert each to an ASCI-Char. to be digestible for the "decrypt" algo:
var Xstr = String.fromCharCode(Xval);
// Call the decrypt algo with the questionable character as a "salt":
var output = decrypt(Xstr, crypt);
// List the test result on the screen:
document.writeln('Nr.#'+('00'+Xval).slice(-3)+': ' + output + '<br>');
}
}
... so easy! This "hack" works with any previously used password (no matter how complex), and is the doubt free evidence of the uselessness of the crypt algo described above.
However - please let me being constructive:
First, I use the code of each already encrypted character from the original text as an encryption basis for the next followed character. This method (see below) creates a "self-encryption" that is hard to evaluate from the outside.
The initial encryption is performed by using a "pseudo hash" of the password. I call it "pseudo" because only values from 0 to 255 can be considered due to the ASCII compatibility of the XOR results. This limitation causes another problem, which is addressed in the 2nd example further below. For a start, please take a look at my first example "selfCryptBasics":
function selfCryptBasics(orgtext, passphrase) {
// First, assign some initial variables:
var value, pseudoHash = 0, preVal = 0, orgLen = orgtext.length, output = "";
// Force the "string" type of a possible numeric passphrase (assignment
// of a different variable name is essential for a successful conversion):
var cryptKey = String(passphrase);
var cryptKeyLen = cryptKey.length;
// Create a "pseudo hash" of the given crypt key (= pass phrase):
for(var i = 0; i < cryptKeyLen; i++) {
pseudoHash = pseudoHash^(cryptKey.charCodeAt(i));
}
// Walk letter-wise through the org. text and get their ASCI value:
for(var i = 0; i < orgLen; i++){
var value = orgtext.charCodeAt(i);
// Double encrypt the current letter with 2 nested XOR operations.
// The first XOR operator uses the "pseudo hash" and the second
// the code value of the previously encrypted letter:
value = (value^pseudoHash)^preVal;
// Store the current code value temporarily for the next loop:
preVal = value;
// Append the XOR-result as 2-digit HEX-value to the output string:
output += (value < 16 ? '0' : "") + value.toString(16);
}
// Return the encrypted output:
return output;
}
You may try this example with the original text "stresslessness" and look at the output. The letter "s" is encoded 7 times and the letter "e" 3 times, but it's hard to find matching cipher values for them - quite good, eh?!
But we still have the weak-encryption problem of XOR values that can easily be evaluated with my "hack" script above!!
I insist on XOR encryption because it is uncomplicated and extremely fast. Generating "real" hash codes with dozens of digits requires complex calculations and correspondingly expensive processing. However, if the original password is chopped up into pieces and "XOR'ed" individually, the sum of all the little keys also results in a nearly unique hash - which is what I did in my 3rd (and final) example:
function selfCrypt(orgtext, passphrase) {
var value, preVal, cutPos = 0, tmpHash = 0, output = "";
var pseudoHash = new Array(), orgLen = orgtext.length;
var cryptKey=String(passphrase);
var cryptKeyLen = cryptKey.length;
// Create an array of different "pseudo hashes":
// Therefore truncate the crypt key incrementally from left to right and
// create a specific hash value for each version. In this way, we hope to
// get significantly different values from the starting hash at each pass
// (which is highly probable, except keys like "xxxx" or "0000" are used).
for(var i = 0; i < cryptKeyLen; i++) {
for(var k = cutPos; k < cryptKeyLen; k++) {
tmpHash = tmpHash^(cryptKey.charCodeAt(k));
}
// Fill each hash version into a fast accessible array,
// update the counter, and reset the temp. hash variable:
pseudoHash.push(tmpHash);
cutPos++;
tmpHash = 0;
}
// Let the encryption begin!
// To do this, the variable “cutPos” is used as a pointer for the
// hash array. We start at the end of the array and jump back step by
// step after each run.
cutPos = cryptKeyLen;
// Walk letter-wise through the org. text as usual:
for(var i = 0; i < orgLen; i++){
var value = orgtext.charCodeAt(i);
// Double encrypt the letter like we did in the 1st example, but...
// ...watch out: Instead one single "pseudo hash" we have now
// varying values from the hash array on each loop:
value = (value^pseudoHash[cutPos])^preVal;
// Store the current code value temporarily for the next loop
// and set the next lower pointer position. If the zero position
// is reached, reset the pointer at it's end position:
preVal = value;
cutPos--;
if (cutPos < 0) cutPos = cryptKeyLen;
// Prepare the output string as usual:
output += (value < 16 ? '0' : "") + value.toString(16);
}
// Return the encrypted output:
return output;
}
The resulting code is almost impossible to crack, as it provides no external clues to any patterns. The algorithm presented here can be placed openly in any type of document without risk, as it is not the script itself that keeps the key, but only user's password and the self encrypting text!
...aah - you want to decode this scrambled stuff too??
OK, here we go:
function selfDecrypt(cipher, passphrase) {
var value, memVal, preVal, cutPos = 0, tmpHash = 0, output = "";
var pseudoHash = new Array(), cipherLen = cipher.length;
var cryptKey = String(passphrase);
var cryptKeyLen = cryptKey.length;
// Generate the "pseudo hash" array:
for(var i = 0; i < cryptKeyLen; i++) {
for(var k = cutPos; k < cryptKeyLen; k++) {
tmpHash = tmpHash^(cryptKey.charCodeAt(k));
}
pseudoHash.push(tmpHash);
tmpHash = 0;
cutPos++;
}
cutPos = cryptKeyLen;
// Walk through the hex cipher in steps of 2 char's:
for(var i=0; i<cipherLen; i+=2){
// Get the decimal value of the current 2-digit hex:
value = (parseInt(cipher.substring(i,i+2),16));
// Hold this code value temporarily for the next decryption loop:
memVal = value;
// Decrypt the code with the double XOR algo:
value = (value^pseudoHash[cutPos])^preVal;
// Store the temp. hold original code for the next loop:
preVal = memVal;
// Update the control variables:
cutPos--;
if (cutPos < 0) cutPos = cryptKeyLen;
// Convert the decoded value into a valid ASCII character
// and append it to the output string:
output += String.fromCharCode(value);
}
// Return the plain text:
return output;
}
... that's it!
Due to its simple structure and the widely known commands, this script can be easily translated into other programming languages (e.g. PHP, Python or even VBA)!
Have fun and a nice day!
P.S.:
Hash codes always come with a "collision risk", which means two different passwords might accidentally generate the same hash. This very vague risk could also occur with my "pseudo" hashes, but it is even less likely, as this method always uses different lengths of the original password. It may be that some sections generate identical hashes, but in their sum as an array chain this will hardly be the case. Alan Turing's "bombe" would be suitable for detecting such collisions - but who has such a loud, huge thing lying around in their basement? ;-)
Try updating Lombok to a newer version; I had the same error message after switching to a newer Java version.
New feature: Use scoring profiles with semantic ranker in Azure AI Search. https://stackoverflow.com/a/79711514/9885226
Yii2 now uses MailerInterface, which does not contain a render method.
But you can annotate the variable and use the magic call:
/** @var yii\mail\BaseMailer $m */
$m = Yii::$app->mailer;
echo $m->render('auth/code', ['code' => 123456], 'layouts/html');
I use PostgreSQL and had a similar issue. You can try something like this:
override fun getDialect(connectionFactory: ConnectionFactory): R2dbcDialect {
return org.springframework.data.r2dbc.dialect.MySqlDialect()
}
Did you finish your project?
I've been working on a similar type of project and ended up here.
Curious to know: did it work?
Same for me: no mat-expansion-panel animation and no loader, and the checkbox also looks different.
trackCircuitReducer: trackCircuitReducer
Thank you :)
try adding this field: tool_choice = {"type": "tool", "name": "tool_name"}
You cannot use a where clause on non-unique fields, as described in the documentation: https://www.prisma.io/docs/orm/prisma-client/queries/crud#update. I was also facing the same problem and had to add @unique to my schema.
I have the same issue: the CSS :focus selector is not being considered at all.
This src() function is not implemented in browsers (for an unknown reason), as you can see here: Compatibility of url() and src()
For anyone still encountering this problem:
A new results field, @search.rerankerBoostedScore, was added in a preview version. It appears to connect scoring profiles with the semantic ranker.
Use scoring profiles with semantic ranker in Azure AI Search https://learn.microsoft.com/en-us/azure/search/semantic-how-to-enable-scoring-profiles
The issue actually lies in how I was interpreting the standard wording. In fact, it applies to any array, whatever the way it is created. As commented by Language-Lawyer, I should treat the reference to [decl.array] as a mere example.
You can however use element to get the last string:
message1 = element(split("/", "text1/text2"),-1) # get text2
await parseResult.InvokeAsync();
if (parseResult.Action.GetType() == typeof(System.CommandLine.Help.HelpAction))
{
return (exitCode: 0, shouldExit: true);
}
It's been 3 years since this question was raised and I have the same one. Are conditions possible now?
Thanks!
Eva
With Expo in 2025 you can just do:
import * as Audio from 'expo-audio';
import * as Speech from 'expo-speech';

const onPressListener = () => {
  Audio.setAudioModeAsync({
    playsInSilentMode: true,
    shouldPlayInBackground: true,
  });
  Speech.speak(message.text);
}
You should use RecyclerView and set its LayoutManager to LinearLayoutManager.
Had the same issue, and after trying many things the problem was in the windows registry. I was missing P9NP entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order and HwOrder.
Got it from https://github.com/microsoft/WSL/issues/4027#issuecomment-496628274
This helped for me (Ubuntu):
sudo apt install libfribidi-bin libfribidi-dev
Hello, having had the same problem, I'm posting the solution I found:
SELECT
name = 'Test',
ref = null
FOR XML RAW, ELEMENTS XSINIL;
I think you are using keras-tuner==1.4.7 with an earlier version of tensorflow (<2.6). Use keras-tuner==1.3.5, it worked for me with tensorflow==2.2.2
llama.cpp doesn't have a /v1/generate endpoint, so the server will respond with a 404 error. Use the OpenAI-compatible /v1/chat/completions endpoint instead.
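For example, a minimal request from Python; the host, port, and model name are assumptions, so adjust them to however your llama.cpp server was started:

import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # OpenAI-compatible route on the llama.cpp server
    json={
        "model": "local-model",  # placeholder; the server uses the model it was started with
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])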
As for your question:
What is the right way to use @MainActor in SwiftUI?
The answer is:
Correct usage of @MainActor and MainActor.run
@MainActor
func getPendingTaskList() async {
// UI update
viewState = .shimmering
do {
// API call
let response = try await useCase.getPendingList(
responseType: [PendingTaskResponse].self
)
// UI update
setupDisplayModel(response)
viewState = .loaded
} catch(let error) {
// Handle error and update UI
debugPrint(error.localizedDescription)
viewState = .loaded
}
}
Why this works:
The @MainActor annotation ensures that the entire function is executed on the main thread. No explicit await MainActor.run block is needed for the UI updates.
When to use await MainActor.run:
If you are in a context that is not already running on the main thread, and you need to perform UI updates, you can use await MainActor.run to switch to the main thread. For example:
func fetchData() async {
do {
let response = try await useCase.getPendingList(
responseType: [PendingTaskResponse].self
)
// Switch to the main thread for UI updates
await MainActor.run {
setupDisplayModel(response)
viewState = .loaded
}
} catch(let error) {
await MainActor.run {
debugPrint(error.localizedDescription)
viewState = .loaded
}
}
}
Here, fetchData() is not annotated with @MainActor, so it executes on a background thread (e.g., where the API call happens). We use await MainActor.run to ensure that the UI updates happen on the main thread.
Choosing between @MainActor and MainActor.run:
Use @MainActor for tasks involving UI updates: annotate a method with @MainActor if most of its operations involve updating the UI. This simplifies your code by eliminating the need for await MainActor.run calls within the method.
Use await MainActor.run only when necessary: if you're on a background thread (e.g., during a long-running task or API call) and need to perform UI updates, use await MainActor.run. Avoid using it unnecessarily within @MainActor-scoped methods.
Keep UI updates minimal: minimize the amount of work that is performed under @MainActor or MainActor.run to avoid blocking the main thread.
You can also refer to this link about Actor and MainActor:
https://www.avanderlee.com/swift/mainactor-dispatch-main-thread/
I am also getting the status code 400. However, I am using the WEB CONVERSE command with the CONTAINER and CHANNEL options and believe FROMLENGTH is not required in this case. I have checked that the JSON in the container runs fine in Postman. Any pointers to troubleshoot the issue, or a known solution to fix it?
Solved the issue. The parent of this component had an animation on it, and as I found out while going through the docs, animations on a parent block animations on children unless explicitly disabled. I tried that, but it still did not work in my case. So I moved a couple of things around and merged both animations into one. It started working great.