Thanks for the comments; my assumption was that arrays internally were no more than a special const * type that happened to point to a larger area of memory. Clearly I was wrong.
(A scuffed "solution" that actually works)
void foo(){
    static const int * tree1[] = {
        new int[2]{FULL,1},
        new int[2]{FULL,2},
        new int[2]{EMPTY,4},
        new int[2]{EMPTY,5},
        new int[2]{EMPTY,3}
    };
    static const int * tree2[] = {
        new int[2]{FULL,1},
        new int[2]{EMPTY,2},
        new int[2]{FULL,3},
        new int[2]{LEFT,4},
        new int[2]{LEFT,5},
        new int[2]{EMPTY,6},
        new int[2]{RIGHT,7},
        new int[2]{RIGHT,8},
        new int[2]{EMPTY,9}
    };
    static const int ** whydoesthisnotwork[] = {tree1, tree2};
}
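For what it's worth, here is a minimal sketch of a leak-free alternative (assuming FULL, EMPTY, LEFT and RIGHT are integer constants): real built-in 2D arrays can be grouped through a pointer-to-array type instead of heap allocations.

void foo() {
    // Each tree is a true 2D array; static constexpr keeps it in static storage, no new needed.
    static constexpr int tree1[][2] = {
        {FULL, 1}, {FULL, 2}, {EMPTY, 4}, {EMPTY, 5}, {EMPTY, 3}
    };
    static constexpr int tree2[][2] = {
        {FULL, 1}, {EMPTY, 2}, {FULL, 3}, {LEFT, 4}, {LEFT, 5},
        {EMPTY, 6}, {RIGHT, 7}, {RIGHT, 8}, {EMPTY, 9}
    };
    // An array of "pointer to array of 2 const int"; each tree decays to that type.
    static constexpr const int (*forest[])[2] = {tree1, tree2};
    int first = forest[0][0][1]; // reads tree1[0][1], i.e. 1
    (void)first;
}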
There is no need to get paths; there is a similar feature called segments, I suppose.
plt.contourf(ds.TIME1, ds.DEPTH, ds.temp, levels=np.arange(50, 450, 40), cmap="hot")
contour = plt.contour(ds.TIME1,ds.DEPTH,ds.temp,levels=[23])
contour.allsegs[0][1] # this gives all the x,y data points for this segment, and so on for the rest of the segments.
Thank you for your question.
As you have mentioned, the logs are correlated to the calls through traceId at the moment. Currently we cannot use any configuration or auto-assignment to correlate the logs to the Instana services. This is a gap and we acknowledge it.
We are continuously improving our logging area and we plan to fix this gap as well. We do not have a timeline for this as of now.
I think this can be handled with any of the below:-
(1) Providing optimal icons in the web app manifest.
(2) Placeholder splash in your application :- Create a custom in-app splash screen that mimics the browser's splash screen.
(3) Fallback approach :- If precision is critical but measurements aren't exact, try transitioning your animation from a larger generic icon that fades/scales down to an estimated size based on the screen.
These are the approaches which I thought would work, though I have never tested them or implemented anything this way before.
builder.Services.AddHsts(options =>
{
    // HSTS settings: opt in to the preload list, include subdomains, 60-day max age,
    // and exclude specific hosts from the HSTS header.
    options.Preload = true;
    options.IncludeSubDomains = true;
    options.MaxAge = TimeSpan.FromDays(60);
    options.ExcludedHosts.Add("example.com");
    options.ExcludedHosts.Add("www.example.com");
});
Turns out this is a bug in recent ChromeDriver which is supposed to be fixed with the next release:
https://issues.chromium.org/issues/379584343 https://chromium-review.googlesource.com/c/chromium/src/+/6063257
Use the code below and it will work:
<Route path="/report/:id" element={<ReportView key={Math.random()} />} />
If you are using the component prop (React Router v5), note that it expects a component type rather than an element, so pass the key through render instead:
<Route path="/report/:id" render={() => <ReportView key={Math.random()} />} />
Just wanted to close the loop on this.
I am not sure at which point exactly things go wrong, but the cause is apparently a bug in the installation routine of the plugin's CMake script.
I was able to get my project to link with the release build of the plugin by uninstalling everything and then installing only the release build. The downside to this is that now, for both build configurations (release and debug), my project will only link to release. This is good enough for me for the moment.
Many thanks again to @Tsyvarev for helping me to debug this. I was not able to fix the plugin's behavior through any of the CMake settings mentioned, unfortunately.
To do this, please include the image files (like logos) with your .exe: place them in the same folder as your Python script and use relative paths in your code. When using auto-py-to-exe, ensure you specify these files in the "Additional Files" section to bundle them with the executable.
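As a minimal sketch (assuming auto-py-to-exe's default PyInstaller one-file mode, where bundled files are unpacked to a temporary folder exposed as sys._MEIPASS), a small helper can resolve paths both during development and inside the frozen executable:

import os
import sys

def resource_path(relative_path):
    # When frozen by PyInstaller, bundled data lives under sys._MEIPASS;
    # otherwise fall back to the directory of the script itself.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative_path)

logo_file = resource_path("logo.png")  # "logo.png" is a hypothetical bundled file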
I'd like to create a system for myself that will allow me to lock the doors of my home. Please help me; I need to know all the tools required.
Use @.disabled on the HTML element, as mentioned in the Angular documentation:
When an element within an HTML template has animations turned off using the @.disabled host binding, animations are turned off on all inner elements as well. You can't selectively turn off multiple animations on a single element.
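For illustration, a minimal sketch (the component, trigger name and flag below are made up for the example) of binding @.disabled on a container element:

import { Component } from '@angular/core';
import { trigger, transition, style, animate } from '@angular/animations';

@Component({
  selector: 'app-panel',
  template: `
    <div [@.disabled]="animationsDisabled">
      <!-- the child's fadeIn animation is also disabled while the flag is true -->
      <p [@fadeIn]>Content</p>
    </div>
  `,
  animations: [
    trigger('fadeIn', [
      transition(':enter', [style({ opacity: 0 }), animate('300ms', style({ opacity: 1 }))]),
    ]),
  ],
})
export class PanelComponent {
  animationsDisabled = true;
}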
import openpyx1
ModuleNotFoundError: No module named 'openpyx1'
Why?! I got the following (screenshot) that confirms (?) that the module is installed ...
Solved by creating a multi-stage Dockerfile which uses the initial stage as the builder and copies over the files from that stage.
As suggested by @Geert Bellekens, I had a try with a Document Script Fragment and managed to document enumeration data types in a table (each enum in a separate row, code and description in separate columns).
My solution approach:
Create a custom template with a template fragment 'templateSelector' in the 'element >' section.
The fragment 'templateSelector' contains just a 'custom >' section.
In the Document Options of the fragment, go to 'Custom Query', select 'Document Script' as the template fragment type, and select via the dropdown the Custom Script ('getAttributeAndType') to get the attributes and their types (see further below). Call the script as getAttributeAndType(#OBJECTID#) in the window to pass the #OBJECTID#.
Create a Custom Script. The Custom Script 'getAttributeAndType' gets the attribute name and notes and calls two other templates, 'template_simpleType' and 'template_enumType', depending on whether the data type is simple (e.g. FLAG) or an enumeration.
!INC Local Scripts.EAConstants-JScript

var ENUM_TYPE_TEMPLATE = "template_enumType";
var SIMPLE_TYPE_TEMPLATE = "template_simpleType";

function getAttributeAndType(objectID)
{
    var docGenerator as EA.DocumentGenerator;
    docGenerator = Repository.CreateDocumentGenerator();
    if ( docGenerator.NewDocument("") )
    {
        try
        {
            var currentElement as EA.Element;
            currentElement = Repository.GetElementByID(objectID);
            var attributesCollection as EA.Collection;
            attributesCollection = currentElement.Attributes;
            for ( var i = 0; i < attributesCollection.Count; i++) {
                var currentAttribute as EA.Attribute;
                currentAttribute = attributesCollection.GetAt(i);
                docGenerator.InsertText("Attribute: " + currentAttribute.Name, "");
                docGenerator.InsertText("\n", "");
                docGenerator.InsertText(currentAttribute.Notes, "");
                var currentAttributeTypeElement as EA.Element;
                currentAttributeTypeElement = Repository.GetElementByID(currentAttribute.ClassifierID);
                if (currentAttributeTypeElement.Type == "Enumeration")
                    docGenerator.DocumentElement(currentAttributeTypeElement.ElementID, 1, ENUM_TYPE_TEMPLATE);
                else if (currentAttributeTypeElement.Type == "DataType")
                    docGenerator.DocumentElement(currentAttributeTypeElement.ElementID, 1, SIMPLE_TYPE_TEMPLATE);
                else
                    Session.Output("Data type not recognized");
            }
        }
        catch(e)
        {
            Session.Output(e);
        }
        var rtf = docGenerator.GetDocumentAsRTF();
        return rtf;
    }
    return "";
}
The two templates being called by the script look as follows:
template_enumType:
template_simpleType:
All in all, you need three templates, one fragment, and one script. The first template calls the fragment, the fragment calls the script, and the script calls the other two templates.
It looks good; the code is great and has no issues.
Try switching to a different environment than Replit, but if you want to stay on Replit, try troubleshooting by using cout << "Hello, World!" << endl;
You need to create a Custom Converter to achieve this.
You can refer to this link for an example of how to create a custom converter.
And here you'll find how to use the Custom Converter you have implemented along with the reactive property
Don't forget that you need to keep the instance of the deserialized object in order to really use the Reactive Property. If you deserialize the object every time, you'll override or create a new instance of the properties, losing the observers. This may be useful for it.
I had the same problem with Azure Container Apps. I also used registry.hub.docker.com instead of docker.io and it works. It works in combination with my private Docker repository.
The error is likely related to Replit's environment. You can consider these steps:
- Restart the Replit session or refresh the page, as temporary glitches can cause this issue.
- Create a new project and copy your code into it, as misconfigurations can happen in the environment.
- Check the compiler version by opening the Shell in Replit and running g++ --version. If there's an issue with the compiler setup, this might help identify it.
I fixed this in my project by adding a username and password in application.properties. Before that, the spring.datasource.username and spring.datasource.password settings were empty.
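For reference, a minimal sketch of the relevant entries (the URL, database name and credentials below are placeholders):

spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=myuser
spring.datasource.password=mypassword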
I stumbled upon this question when I was looking for why all my custom views turned white on the storyboards. It turns out this feature was unfortunately deprecated and removed in Xcode 16: https://developer.apple.com/documentation/xcode-release-notes/xcode-16-release-notes
binding.Security.Mode = ServiceModel.BasicHttpSecurityMode.TransportCredentialOnly
binding.Security.Transport.ClientCredentialType = ServiceModel.HttpClientCredentialType.Ntlm
binding.Security.Transport.ProxyCredentialType = ServiceModel.HttpProxyCredentialType.Ntlm
You need to disable compression in the Next app: https://nextjs.org/docs/app/api-reference/next-config-js/compress
Then enable it in Frontdoor instead: https://learn.microsoft.com/en-us/azure/frontdoor/standard-premium/how-to-compression#enabling-compression
As an added bonus, this will offload the Next app, saving some CPU and likely making it respond slightly faster.
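For reference, a minimal next.config.js sketch that turns off the built-in gzip compression so the CDN can handle it instead:

// next.config.js
module.exports = {
  compress: false,
};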
How about using this:
tbl %>%
  as_gt() %>%
  fmt(
    columns = everything(),
    fns = function(x) {
      # Convert numeric values to use comma as decimal separator
      if (is.numeric(x)) format(x, decimal.mark = ",", big.mark = ".") else x
    }
  )
Or even transforming the table before gt:
tbl <- tbl %>%
  mutate(across(where(is.numeric),
                ~ format(., decimal.mark = ",", big.mark = ".")))
Or even using string replace:
tbl %>%
  as_gt() %>%
  text_transform(
    locations = cells_body(),
    fn = function(x) {
      str_replace(as.character(x), "\\.", ",")
    }
  )
Could you provide some test data?
I had this error with pyasn1 == 0.6.1; as a fix, I now use 0.6.0.
This thread answers the question. For the given problem:
tasks = [self.method(i) for i in list]
await asyncio.gather(*tasks)
Open the .env file located in the root of your Laravel project and update the database configuration to use MySQL: the main modification is changing sqlite to mysql. Then run the command: php artisan migrate
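As a rough sketch, the relevant .env entries would look something like this (host, database name and credentials are placeholders):

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=my_database
DB_USERNAME=my_user
DB_PASSWORD=my_password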
Check here - this is possible without any additional code: https://formidableforms.com/knowledgebase/automatically-populate-fields/
In Hyperband, the objective function evaluates the metrics defined in the compile function. I added the mean_squared_error metric while building the model, and after making these changes, the code works as expected. Please refer to this gist for the implementation.
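Roughly, the relevant change looks like this (the model and hyperparameters below are placeholders; the point is that the metric named in the tuner objective must also appear in compile()):

import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(
        optimizer="adam",
        loss="mse",
        metrics=["mean_squared_error"],  # must match the objective below
    )
    return model

tuner = kt.Hyperband(
    build_model,
    objective=kt.Objective("val_mean_squared_error", direction="min"),
    max_epochs=10,
)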
Do not add URLs like https://www.websitename.com/profiles/ if they redirect to another page.
If multiple pages have similar content but different parameters, use the canonical tag to point to the main version of the page.
Use an XML sitemap generator for Magento 2.
Hope this will help you and save a lot of your time https://github.com/JasurIsroilov/Django_project_template
In the screenshot, they mentioned that your channel has some reused content. I think that's the issue. Once you rectify it, I think you will be all good. There must be solutions for that; go through their guidelines.
If you want to remove a certain item you can do:
Query.select("-password")
But if you want to remove certain items from populated documents, do it like this, e.g. if you are populating from a field named userID:
Query.populate({ path: "userID", select: "-password" })
In your query you have not specified that you want to filter data from 2020. Since you are using sysdate, it gives the current date and time, which is why your query is not working; based on the current year it is checking 2024. Try something like this to get all records from 2020:
SELECT * FROM commandes WHERE EXTRACT(YEAR FROM datecommande) = 2020;
In my case, a hint from the Laracasts forum solved it.
The suggestion was to add 'scheme' => 'smtp' in config/mail.php, in the 'mailers' => 'smtp' section.
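Roughly, the change looks like this in config/mail.php (the other entries below are the stock Laravel defaults and may differ in your project):

'mailers' => [
    'smtp' => [
        'transport' => 'smtp',
        'scheme' => 'smtp', // the line suggested on the forum
        'host' => env('MAIL_HOST', '127.0.0.1'),
        'port' => env('MAIL_PORT', 2525),
        'username' => env('MAIL_USERNAME'),
        'password' => env('MAIL_PASSWORD'),
    ],
],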
Thanks, I had just figured it out, but you are right: the problem was my RDWR flag, as the proc files are read-only. The correct flag was RDONLY.
I think in Intune you have to use a complete .mobileconfig if you want to use the vendor settings.
Profile type -> Templates -> Custom
You can also download the .mobileconfig from Jamf and test it here. If you want to remove the signature, you can run this:
openssl smime -inform DER -verify -in Settings.mobileconfig -noverify -out ~/Unsigned.mobileconfig
All the methods mentioned here are correct; however, this is now available in Java by default. Just set the locale (if you want):
Locale.setDefault(Locale("EN", "US"))
Then use java.time.temporal.WeekFields with the default Locale to get the first day of the week.
WeekFields.of(Locale.getDefault()).firstDayOfWeek
The same issue appears on my Sentry dashboard. After searching for "unhandledrejection" in Chrome DevTools, I identified that the CustomEvent is most likely created by scripts initiated by gpt.js (Google Publisher Tag).
Are you reading the test.dat file using a text editor? You're correct in that the program should be writing the string to the file as binary, but if you're opening the file in a text editor, the text editor is probably interpreting the binary as ASCII characters, and displaying them as such.
Make sure you preserve client IP address:
https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/security/client-ip/
CloudFront and firewalls on the way to the gateway may replace the host IP with their own and pass the client IP in another header like X-Forwarded-For; you should configure your Kong to copy the real IP.
It's possible you haven't added this new Widget to your Dashboard, in app/Providers/Filament/AdminPanelProvider.php.
public function panel(Panel $panel): Panel
{
    return $panel
        // ...
        ->widgets([
            // your Dashboard Widgets go here
        ]);
}
For me the issue was that I originally created the file with a different file extension, i.e. not with .cs. If you rename the file, the Paste Special option will not come back. You need to remove and recreate the file. Once I did that, the Paste Special option was back.
Same issue... do you have any solution?
Export Your Project Code :- In FlutterFlow, export your project by navigating to Settings > Export Code. Download the code as a ZIP file to your local machine.
Edit the index.html File :- Unzip the downloaded project. Navigate to the web directory of your project. Open the index.html file in a text editor or an IDE like VS Code. Paste the HubSpot embed code right before the closing </body> tag.
After adding the code, save the index.html file. Run flutter build web in your terminal to rebuild the web app with the updated index.html.
I would try to debug that with good old console.log and print out whether that element returns true from the isVisible() function.
Other than that, you can always wait for some state change, like await this.element.waitFor({ state: "visible" }). Additionally, you can add a bigger timeout.
let text= document.getElementsByTagName('body')[0].innerHTML;
let regExp = /<[a-zA-Z\/][^>]*>[a-zA-Z0-9&._ -]*<[a-zA-Z\/][^>]*>|<[a-zA-Z\/][^>]*>/g;
let formattedText= text.replace(regExp, '');
formattedText will be your output.
Ask the CCAvenue team to whitelist the URL for your server as well as for localhost. You can mail them using [email protected] Refer to this document: https://www.aravin.net/article/complete-guide-integrate-ccavenue-payment-gateway-asp-net-website-screenshot
The from_existing_index method has been deprecated; you can perform the same by using:
from pinecone import Pinecone

pc = Pinecone(api_key=api_key)
index = pc.Index(index_name)
index.describe_index_stats()  ## For verification
You can refer to the Pinecone official Notebook langchain-retrieval-augmentation.ipynb
TLDR: Please take a look at the resource constraints guide: https://developers.google.com/optimization/routing/cvrptw_resources
Did you find any solution or not? Because my friend has the same laptop and was running into the same problem.
The !pip install tensorflow-gpu and tf-nightly-gpu packages were removed from TensorFlow v2.12. We can directly use pip install tensorflow for the GPU. https://pypi.org/project/tensorflow-gpu/ Thank you!
The set type in Python is basically implemented as a hash table.
There are so many great answers above; I'm just going to state a point that is missing:
One more thing is that the size of the hash table is predefined, and adding elements beyond the current capacity triggers a resize of the hash table.
For the complexity part (hash table):
Amortized:
O(1) for individual additions, because resizing does not happen often.
Worst case:
O(n), since resizing involves rehashing all elements into a larger table, which is a linear operation.
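As a quick, rough way to see the resizing behavior (the exact thresholds are a CPython implementation detail and may vary by version):

import sys

s = set()
prev = sys.getsizeof(s)
for i in range(100):
    s.add(i)
    size = sys.getsizeof(s)
    if size != prev:
        # A jump in the allocated size means the hash table was resized and rehashed.
        print(f"resize after {len(s)} elements: {prev} -> {size} bytes")
        prev = size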
If you did everything those guys suggested and still got the error, you should also check whether you ever lost or changed your SHA-1 release key for your app in the Play Console. You need to use the first key that was used for the app signing certificate and paste it into the Firebase fingerprints. This resolved my issue.
You need to add the command line option --output-filename.
Your command would then look like this: python -m nuitka --module some_module.py --mingw64 --no-pyi-file --remove-output --output-filename=test_module
Even now with the latest version of vSphere 8 this is a problem.
The best way to do this is to generate the DLL without the serialization assembly (which compiles quickly) and then generate the serialization assembly separately using sgen.exe.
sgen.exe takes ages to run, but because it's running outside of the development environment it does complete successfully.
It's quite faffy but fine when it's all completed - I've documented the whole process here:
https://david-homer.blogspot.com/2024/11/solved-compile-vsphere-management-sdk.html
I think you need to write nullable() before constrained('table name').
Example: $table->foreignId('parent_id')->nullable()->constrained('categories', 'id')->nullOnDelete();
I had to pass the Git personal access token while cloning the repo:
- name: Checkout
  uses: actions/checkout@v4
  with:
    token: ${{ secrets.GIT_TOKEN }}
From then on I am able to commit/push with the git user which has admin access to the repo and effectively able to bypass the "Require a pull request before merging" check.
I referred to the following: https://www.paulmowat.co.uk/blog/resolve-github-action-gh006-protected-branch-update-failed
Use a Synology NAS server at your home and set it all up like your office server.
Based on @Elijah's suggestion about registering IOptions, here's a complete solution that shows how to properly configure and use JSON serialization options globally.
First, configure the options in Program.cs:
services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options =>
{
    options.SerializerOptions.PropertyNameCaseInsensitive = true;
    options.SerializerOptions.NumberHandling = JsonNumberHandling.AllowReadingFromString;
    options.SerializerOptions.WriteIndented = false;
    options.SerializerOptions.MaxDepth = 18;
    options.SerializerOptions.AllowTrailingCommas = true;
    options.SerializerOptions.ReadCommentHandling = JsonCommentHandling.Skip;
});
Then inject and use these options in your services. If you need to, you can override them in the constructor:
public class TestService : ITestService
{
    private readonly JsonSerializerOptions _jsonSerializerOptions;

    public TestService(IOptions<JsonOptions> jsonOptions)
    {
        _jsonSerializerOptions = jsonOptions.Value.SerializerOptions;
    }

    public async Task<ModelDto> DeserializeObjectAsync(MemoryStream ms)
    {
        return await JsonSerializer.DeserializeAsync<ModelDto>(
            ms,
            _jsonSerializerOptions
        );
    }
}
Did you get any idea about it?
You might want to try using click_and_hold instead of pointer_down, or maybe using pointer_up rather than release() might work better for you.
Wow wow wow, thanks, it really works!
Very good answer; it worked perfectly in my case. Thanks for this.
Please take a look, I just launched a cool URL escape tool on OfficeEssence.net. It’s perfect for encoding or decoding URLs easily. Check it out and let me know what you think! 👉 OfficeEssence.net 😊
Does this fix the connection issue to eduroam networks on rooted Android 11 phones? If it does, can I get an easy tutorial on where to put this code?
Single: Koin retains a single instance of the object in memory throughout the lifecycle of the app, i.e., until the app closes. Once the object is created, Koin will reuse the same instance whenever it is injected. It is essentially a long-lived object.
This means Koin will hold a reference to the object for as long as the app is running (or until explicitly removed from the Koin container), so it is not discarded or garbage collected until the application shuts down.
Factory: A factory in Koin is a function that returns a new instance of an object each time it's requested.
When you define a factory in Koin, the framework creates a new instance each time the object is requested. These instances are discarded immediately after use (i.e., they are not retained in memory by Koin).
The object is typically eligible for garbage collection once it goes out of scope (when no longer referenced). Koin does not retain the object in memory after it’s no longer needed, which means it is short-lived.
Scoped: A scope in Koin allows you to define an object that lives only within a particular scope (e.g., a specific activity, fragment, or component in your app).
Objects defined in a scope will be created when the scope is created and retained until the scope is closed. When the scope ends (e.g., the activity or fragment lifecycle ends), Koin will release the object and it becomes eligible for garbage collection.
Koin will retain the object within the scope, and when the scope is closed, Koin ensures that the object is disposed of correctly. This ensures the object is not retained longer than necessary and avoids memory leaks associated with unused objects.
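To make the three kinds concrete, here is a minimal sketch of a Koin module (the classes and the scope name below are made-up placeholders):

import org.koin.core.qualifier.named
import org.koin.dsl.module

class ApiClient
class RequestBuilder(val client: ApiClient)
class SessionTracker

val appModule = module {
    // One instance kept for the whole application lifetime.
    single { ApiClient() }
    // A fresh instance every time it is injected; not retained by Koin.
    factory { RequestBuilder(get()) }
    // Lives only as long as the named scope is open.
    scope(named("MyActivityScope")) {
        scoped { SessionTracker() }
    }
}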
Koin itself does not directly cause memory leaks, but improper handling of scopes or object references could lead to leaks. For example:
If you keep a reference to a Koin-provided object (e.g., a single or scoped object) outside of the lifecycle of the scope in which it was created (for example, keeping a reference to an activity-scoped object after the activity is destroyed), the object won't be garbage collected even if it’s no longer needed, causing a potential memory leak. Another possible issue could be if you forget to close a scope manually, causing objects within that scope to remain in memory longer than expected.
Filament's infolist package allows you to render a read-only list of data about a particular entity.
You can have a custom View page with a form or you can have the Infolist, but you can't have an editable Infolist, that's just how it works.
Take a look at this documentation link: Creating another View page.
Here is our solution to this problem.
When restarting the 3rd primary broker, we noticed that messages were accumulating in the internal queue between the 2nd and the 3rd broker ($.artemis.internal.sf.amq-cluster.).
When looking at these messages in the console, we found one business-related message duplicating. This business message was sent via the console and was supposed to be sent on one of our existing addresses but was supposedly sent on the internal queue by error. In the listing of messages in the console, we could see this message duplicating, but with the same messageId.
Our solution was to purge the internal queue and as soon as we did that, the 3rd primary broker was available again and synced with its backup server instantly.
First get the whole string and try to use a stack to store the numbers and operators. Then try to use postfix notation to calculate the expression.
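As a rough illustration of that idea (a Python sketch with simplifying assumptions: space-separated tokens, only + - * / and no parentheses):

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    # Shunting-yard: numbers go straight to the output, operators wait on a stack
    # until an operator of equal or higher precedence needs to be popped first.
    out, ops = [], []
    for t in tokens:
        if t in PREC:
            while ops and PREC[ops[-1]] >= PREC[t]:
                out.append(ops.pop())
            ops.append(t)
        else:
            out.append(t)
    return out + ops[::-1]

def eval_postfix(postfix):
    stack = []
    for t in postfix:
        if t in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[t](a, b))
        else:
            stack.append(float(t))
    return stack[0]

print(eval_postfix(to_postfix("3 + 4 * 2".split())))  # 11.0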
I have successfully pip installed and used 'oracledb' module to connect to Oracle database from python. Thanks for all the helpful answers and guidance!
Error: headers was called outside a request scope. Read more: https://nextjs.org/docs/messages/next-dynamic-api-wrong-context
I'm getting this issue. I tried to use WorkOS for login and sign up, but I'm getting this. Someone please help me.
// A simple URL-matching pattern for illustration; adjust it to your needs.
let regExp = /(https?:\/\/[^\s]+)/g;
let string = "your_text";
string = string.replace(regExp, (url) => "<a href='" + url + "'>" + url + "</a>");
Try the above solution. It will replace all the URLs in your string with an anchor tag.
My guess would be that you're not deleting all the volumes and images. I faced the same issue with Vue, and by the way it actually should work when you rebuild, but it wasn't. So I just deleted the image itself and the volumes. Even after docker system prune -af --volumes there were still volumes present; the command doesn't entirely remove all images, volumes and containers. I had to run docker volume prune -a for all the volumes to be removed. So you can try to delete everything as you tried before, but before building again, just run docker volume ls and docker image ls to be sure.
I have got this to check if you have voted or not:
$post->voters()->where('user_id', auth()->id())->exists(); // will return true or false
Now, if you are more specifically looking for whether it is an up or down vote:
$post->voters()->where('user_id', auth()->id())->where('type', 'up')->exists();
$post->voters()->where('user_id', auth()->id())->where('type', 'down')->exists();
Try this:
File > Invalidate Caches... > Invalidate and Restart
This helped me.
You can send a message with media, like a .gif file.
await bot.sendAnimation(chatId, defaultIMG.file_id, {
duration: defaultIMG.duration || 2,
width: defaultIMG.width || 640,
height: defaultIMG.height || 640,
thumb: defaultIMG.thumb.file_id,
caption: strBotDM, // Add the caption here
parse_mode: "HTML",
});
Please check this github repo. https://github.com/Any-bot/D2T-msg-forward
Also https://dev.to/plzbugmenot/building-a-discord-to-telegram-token-address-forwarder-8m
I wanna hear from you.
Thank you.
The header is added in later versions of .NET Core (8 and 9) only for apps deployed on Windows.
Linux App:
I also referred to this doc and tried to remove the header using the Middleware class.
Thanks @Thomas Ardal for the explanation:
ASP.NET Core will return the X-Powered-By header. This happens when you host your website on IIS. You simply cannot remove the header in middleware, since this is out of hands for ASP.NET Core. web.config to the rescue:
Output with Web.config:
The Web.config file will be created in the application root directory. Also refer to this blog for more details.
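For reference, the usual web.config approach looks like this (a sketch of the standard IIS custom-headers removal; your file may already contain other settings that should be kept):

<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>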
Easy, I was missing the preceding A from AWS_SECRET_ACCESS_KEY.
Maybe this will help someone. Just add one more column to the list (in the author's case, managers).
There is a tool that allows you to analyze Nextflow workflows and generate the DAG. It does this statically (i.e., without executing the workflows). Additionally, it supports DSL1 workflows.
Here is the link to the website: https://bioflow-insight.pasteur.cloud/ And the associated paper: https://doi.org/10.1093/nargab/lqae092
Hope this helps!:)
Figured out the workaround for this: use the below config in the ConfigMap of the ingress controller: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-ssl-location-only
If you are facing this issue, git fetch will resolve it:
git fetch
This worked for me
INSERT INTO x WITH y AS (SELECT * FROM numbers(10)) SELECT * FROM y;
WITH y AS (SELECT * FROM numbers(10)) INSERT INTO x SELECT * FROM y;
For someone in the future who still has doubts:
Updates for info.cukes stopped, and all future versions were released under the io.cucumber namespace.
io.cucumber is the current version of the Cucumber libraries. It is actively maintained and includes all the latest features and bug fixes for integrating Cucumber with Java projects.
This library is modular, which means you can add only the required dependencies (e.g., cucumber-java, cucumber-spring, cucumber-junit) instead of including a monolithic library.
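For example, a typical Maven dependency under the io.cucumber group looks like this (the version property is a placeholder; pick the current release for your project):

<dependency>
    <groupId>io.cucumber</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>${cucumber.version}</version>
    <scope>test</scope>
</dependency>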
You should not have a <p> tag wrapping other <p> tags. Each <p> (paragraph) tag should be used individually; use the following method instead:
<div>part1</div>
I found that the 'when' condition, even though it seemed correct, was preventing the mapping from triggering:
!notebookEditorFocused && editorLangId == 'markdown'
I removed this and it works as expected now... maybe not the best fix but it got the job done.
Try using the --pretty option for formatting; to add a new line you have to use %n. Your command should look something like this:
git log --oneline --pretty=format:"%h %s%n"
In the above command, %h shows the commit hash, %s shows the commit message, and %n is a new line.
As of December 2024, there is no built-in option to build or serve multiple application projects. We would like to introduce a new type "component" that would probably fit your needs here. This is envisioned to be introduced next year.
However, for the time being you might try the UI5 community middleware ui5-middleware-ui5 which is a workaround for this limitation.
I've found the solution. The problem was silly, because while accessing /api/auth/login I was adding "Authentication" header. Because there was incorrect authentication data, CustomAuthenticationFilter was throwing exception and I got unauthorized. Sending request without "Authorization" header fixed problem. Of course for secured endpoints this header is required.
Wagtail modeltranslation doesn't work with snippets, only Page models. Wagtail with CodeRed CRX is almost perfect with their navigation bars... but there is one big problem: LayoutSettings uses one instance of Navbar, so how could we do multilanguage?
TranslatableMixin is one direction to help, but we still have LayoutSettings related to the Navbar model.
Strange that nobody has figured out this problem (footer menus) yet.
For Visual Studio 2022 (17.9.7) and C++, I found this under:
TOOLS > Options... > Text Editor > C/C++ > Code Style > General > Generated documentation comments style
If I set that to "None", the documentation comment template does not appear anymore after typing ///.
I have managed to resolve this issue. The reason for the error is very simple:
I had to remove the 'i' from iloc in "res = getHistoricalData(symboldf.iloc[i])" to make it "res = getHistoricalData(symboldf.loc[i])". This will solve the issue.
candledfList = []
for i in symboldf.index[:-1]:
    res = getHistoricalData(symboldf.loc[i])
    if res is not None:
        candledfList.append(res)
finalDataDf = pd.concat(candledfList, ignore_index=True)
The for loop below generates a single/separate file for each company's stock data. We could remove this, move the logic into the for loop above, and generate one single CSV file to capture all companies' data. But the problem with this approach is that we will lose data for several companies, and I am not sure why. I lost 36 companies' data when I tried to capture all companies' data in one single CSV file.
isCsv = True
for symData in candledfList[:-1]:
    try:
        filename = symData.iloc[0]['symbol']
        if isCsv:
            filename = f'{filename}.csv'
            symData.to_csv(folder + filename, index=False)
        else:
            filename = f'{filename}.parquet'
            symData.to_parquet(folder_parquet + filename, engine='pyarrow')
    except Exception as e:
        print(f'Error {e}')
Try unchecking Environment->Find and Replace->Automatically limit search to selection
Argh, it was about the probes. The automatic configuration was messed up.
Ceedling cannot handle this automatically; this is a bad dependency problem.
You have several solutions here, but the best is to include "Types.h" directly in "Functions.h".
If you really cannot edit the sources for some reason, you can add "Types.h" to your Ceedling YAML file under [:cmock][:includes]; this option can be used to inject header files into the generated mocks, see the documentation.
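Roughly, that looks like this in project.yml (a sketch; keep your existing cmock settings alongside it):

:cmock:
  :includes:
    - Types.h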
func removeVowels(input: String) -> String {
let vowels: [Character] = ["a", "e", "i", "o", "u", "A", "E", "I", "O", "U"]
let result = String(input.filter { !vowels.contains($0) })
return result
}
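For example, calling it would look like this (output shown as a comment):

print(removeVowels(input: "Hello World"))  // prints "Hll Wrld"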
For Windows users with Ubuntu for Windows (WSL), the commands have to be launched from a Windows terminal, for example PowerShell.